Dependency Inversion Principle

David Morales

The Dependency Inversion Principle is the “D” in SOLID, and probably the one with the most confusing definition of the five. Let’s unravel what it really means and how to apply it pragmatically in Ruby.

A Problematic Definition

The original definition has two parts:

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.

B. Abstractions should not depend on details. Details should depend on abstractions.

If you’ve read this several times and still don’t fully understand it, you’re not alone. The problem is that this definition uses very specific terminology from the C++ and Java context of the 1990s, where “abstraction” explicitly means interfaces or abstract classes.

Additionally, the word “inversion” in the name creates confusion. What exactly are we inverting? The term refers to a common practice of that era where high-level API design was conditioned by low-level APIs. Instead of thinking order.save, programmers wrote code that reflected low-level operations like file.write_bytes_to_disk.

Interestingly, in more recent texts about this principle, the word “inversion” barely appears outside the name itself. Part “A” of the definition is largely redundant if we properly understand part “B”.

We can simplify DIP to something more digestible:

Code should depend on stable behaviors (interfaces), not concrete implementations.

Or put another way: program against what something does, not how it does it.

A Bit of History: Why This Principle Exists

To understand DIP, it helps to know the context in which it emerged. In the 1990s and 2000s, Java and C++ developers encountered a recurring problem when writing unit tests.

Imagine a class that internally uses a PaymentGateway:

public class ShipmentProcessor {
  public void process(Shipment shipment) {
    PaymentGateway gateway = new PaymentGateway();
    gateway.charge(shipment.getTotal());
    // ...
  }
}

How do you test it without making real credit card charges? You need to replace PaymentGateway with a mock. The most straightforward way would be to intercept the call to new so it returns the mock instead of the real object, without touching the ShipmentProcessor code. But in Java, new is a reserved language keyword, not a method, so there’s no way to override it.

The solution was to pass dependencies from the outside (dependency injection):

public class ShipmentProcessor {
  private PaymentGateway gateway;

  public ShipmentProcessor(PaymentGateway gateway) {
    this.gateway = gateway;
  }

  public void process(Shipment shipment) {
    this.gateway.charge(shipment.getTotal());
    // ...
  }
}

Now you could pass a mock in tests.

For greater flexibility, interfaces were also separated from implementations. Thus was born the famous pattern of one interface per class, with the implementation kept separate.

It’s important to remember that this pattern arose to solve a specific technical problem in certain languages, not from deep reflection on software architecture.

What About Ruby?

In Ruby, the original problem doesn’t exist. We can replace methods of any class at runtime. Here’s what the class that internally uses a PaymentGateway would look like:

class ShipmentProcessor
  def process(shipment)
    gateway = PaymentGateway.new
    gateway.charge(shipment.total)
    # ...
  end
end

Since new is a normal method called on an object, and Ruby allows dynamically modifying any method, we can easily configure PaymentGateway.new to return a mock during tests, without needing to inject dependencies.
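As a minimal sketch of that idea in plain Ruby (PaymentGateway and ShipmentProcessor here are simplified stand-ins, not the article’s full classes):

```ruby
class PaymentGateway
  def charge(amount)
    raise "this would hit the real payment provider"
  end
end

class ShipmentProcessor
  def process(shipment)
    PaymentGateway.new.charge(shipment[:total])
  end
end

# A fake that records the charge instead of performing it.
class FakeGateway
  attr_reader :charged

  def charge(amount)
    @charged = amount
  end
end

fake = FakeGateway.new
# Because .new is an ordinary method, we can redefine it for the test
# without touching ShipmentProcessor at all.
PaymentGateway.define_singleton_method(:new) { fake }

ShipmentProcessor.new.process({ total: 100 })
puts fake.charged # => 100
```

In a real test suite you’d typically let a mocking library do this interception for you (for example, RSpec’s allow(PaymentGateway).to receive(:new)), since it restores the original method after each example.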

This doesn’t mean DIP is irrelevant in Ruby, but it does mean we should apply it with different criteria: not out of technical obligation, but when it genuinely improves our design.

The Problem: Coupling to Concrete Details

Setting testing aside, there’s a more fundamental reason related to the principle’s description: avoiding coupling high-level code to the implementation details of low-level code.

Let’s imagine we’re developing the notification system for an online store. When a customer places an order, we want to notify them through different channels: email, SMS, or push notification.

A first approach might be this:

class OrderNotifier
  def notify(order, channel)
    case channel
    when :email
      EmailService.new.send_email(to: order.customer.email, subject: "...", body: "...")
    when :sms
      TwilioClient.new(ENV["TWILIO_KEY"]).send_message(phone: order.customer.phone, text: "...")
    when :push
      FirebaseClient.new.send_notification(token: order.customer.device_token, title: "...")
    end
  end
end

This code works, but it has a serious problem: OrderNotifier knows the implementation details of each channel. It knows that Twilio requires an API key, that Firebase uses device tokens, that email needs a subject and body…

The Problem Gets Worse with Each New Requirement

The problematic code doesn’t look so bad at first. It’s when the system grows that it becomes unmanageable.

Some customers want notification by email and SMS, marketing wants to add WhatsApp, etc. Each channel has its own peculiarities: different credentials, different message formats, and a specific API interface. OrderNotifier must know the details of all of them.

The result is tight coupling: OrderNotifier can’t function without knowing the specific APIs of Twilio, Firebase, WhatsApp… And you can’t test it in isolation without mocking each of those external services.

The Solution: Implicit Interfaces

In languages like Java or C#, we’d solve this by creating a Notifier interface with a notify method, and each channel would implement it. This is polymorphism: multiple objects responding to the same message with the same semantics.

In Ruby, this is achieved through duck typing: the contract isn’t written in the code, but exists as a convention.

Let’s redesign our system. Each notifier encapsulates its own details and exposes a common interface:

class EmailNotifier
  def notify(order)
    EmailService.new.send_email(to: order.customer.email, subject: "...", body: "...")
  end
end

class SmsNotifier
  def notify(order)
    TwilioClient.new(ENV["TWILIO_KEY"]).send_message(phone: order.customer.phone, text: "...")
  end
end

class PushNotifier
  def notify(order)
    FirebaseClient.new.send_notification(token: order.customer.device_token, title: "...")
  end
end

Now OrderNotifier can be drastically simplified:

class OrderNotifier
  def initialize(notifiers)
    @notifiers = notifiers
  end

  def notify(order)
    @notifiers.each { |notifier| notifier.notify(order) }
  end
end

OrderNotifier no longer knows anything about emails, SMS, or push notifications. It only knows it has collaborators that respond to notify. When the time comes to add Telegram, we’ll create TelegramNotifier with its notify method and that’s it—we won’t need to touch OrderNotifier.
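That extension point can be sketched end to end. In this runnable sketch, TelegramNotifier is hypothetical and the order is reduced to a hash for brevity:

```ruby
class OrderNotifier
  def initialize(notifiers)
    @notifiers = notifiers
  end

  def notify(order)
    @notifiers.each { |notifier| notifier.notify(order) }
  end
end

# A new channel is just another object that responds to #notify(order);
# OrderNotifier itself never changes.
class TelegramNotifier
  def notify(order)
    puts "Telegram message for order #{order[:id]}"
  end
end

OrderNotifier.new([TelegramNotifier.new]).notify({ id: 42 })
# prints "Telegram message for order 42"
```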

We’ve inverted the dependency. Before, high-level code depended on low-level details. Now, both the high-level code and the low-level modules depend on a common abstraction: the implicit notify(order) interface (through duck typing).

Documenting Implicit Interfaces

A disadvantage of implicit interfaces is that there’s no compiler to verify the contract. If someone creates a SlackNotifier whose method is called send_notification instead of notify, the error will only appear at runtime.

There are several strategies to mitigate this. The most explicit is to use a module as documentation:

module Notifier
  def notify(order)
    raise NotImplementedError, "#{self.class} must implement #notify"
  end
end

class EmailNotifier
  include Notifier

  def notify(order)
    # ...
  end
end

Now, if the method isn’t implemented, a clear error will be raised instead of a generic NoMethodError.
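For example, here SlackNotifier is a hypothetical class that gets the contract wrong:

```ruby
module Notifier
  def notify(order)
    raise NotImplementedError, "#{self.class} must implement #notify"
  end
end

class SlackNotifier
  include Notifier

  # Oops: the author implemented the wrong method name.
  def send_notification(order); end
end

begin
  SlackNotifier.new.notify(nil)
rescue NotImplementedError => e
  puts e.message # => "SlackNotifier must implement #notify"
end
```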

Other ways to achieve this include documentation in comments or contract tests with shared_examples in RSpec.

Dependency Injection: When to Use It and When Not To

You’ll have noticed that in the previous example, OrderNotifier receives its collaborators through the constructor. This pattern is called dependency injection and is a concrete way of applying DIP.

But be careful: dependency injection is not automatically synonymous with good design. As Jeremy Evans points out in Polished Ruby Programming, dependency injection adds complexity, and we should only use it when we really need it.

Let’s look at an example where it does make sense. Imagine a price conversion service:

class PriceConverter
  def initialize
    @rates = ExchangeRateAPI.fetch_current_rates
    @fetched_at = Time.now
  end

  def convert(amount, from:, to:)
    # ...
  end

  def rates_fresh?
    Time.now - @fetched_at < 3600
  end
end

This code has two problems for testing: each instantiation makes a real HTTP call, and rates_fresh? depends on Time.now, so its behavior changes depending on when we run the test.

We could use aggressive mocks, but there’s a more elegant solution: allow these dependencies to be injected with sensible defaults:

class PriceConverter
  def initialize(rates: ExchangeRateAPI.fetch_current_rates, clock: Time)
    @rates = rates
    @clock = clock
    @fetched_at = @clock.now
  end

  # ...
end

In production, the code is used exactly as before: PriceConverter.new. But in tests, we can inject controlled data:

converter = PriceConverter.new(
  rates: { "USD" => { "EUR" => 0.92 } },
  clock: FakeClock.new(Time.new(2024, 6, 15))
)

Tests are now fast, deterministic, and don’t depend on external services.
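Putting the pieces together as a runnable sketch: the ExchangeRateAPI default is dropped here so the snippet is self-contained, the convert body is a plausible guess at a lookup, and FakeClock is a minimal hypothetical stand-in for Time (anything responding to #now works).

```ruby
class PriceConverter
  def initialize(rates:, clock: Time)
    @rates = rates
    @clock = clock
    @fetched_at = @clock.now
  end

  # Look up the rate in the injected table, e.g. rates["USD"]["EUR"].
  def convert(amount, from:, to:)
    amount * @rates.fetch(from).fetch(to)
  end

  def rates_fresh?
    @clock.now - @fetched_at < 3600
  end
end

# Duck-typed stand-in for Time: all PriceConverter needs is #now.
class FakeClock
  def initialize(time)
    @time = time
  end

  def now
    @time
  end
end

converter = PriceConverter.new(
  rates: { "USD" => { "EUR" => 0.92 } },
  clock: FakeClock.new(Time.new(2024, 6, 15))
)

puts converter.convert(100, from: "USD", to: "EUR") # => 92.0
puts converter.rates_fresh? # => true
```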

As you can see, dependency injection is about extracting the new from inside the class and moving it to the initializer arguments. In other words, the complexity isn’t eliminated; it’s just moved. Now someone (an initializer, a factory, client code) is responsible for “wiring” the pieces together.

When NOT to Use Dependency Injection

Following Evans’ advice, you shouldn’t inject dependencies “just in case.” Ask yourself if you need to swap the implementation—for example, to use a mock in tests, a fake service in development, or different providers in production.

If the answer is no, keep the code simple. You can refactor later if the need arises.

Conclusion

The Dependency Inversion Principle can be summarized as designing your code to depend on behaviors (interfaces), not concrete implementations.

In Ruby, this translates to:

  1. Leverage duck typing: If multiple objects respond to the same message with the same semantics, you have an implicit interface.
  2. Encapsulate the details: Each class should hide how it does things and expose only what it does.
  3. Inject dependencies judiciously: Do it when you need real flexibility or when it facilitates testing. Don’t do it “just in case.”
  4. Remember you can refactor later: Don’t add preventive abstractions. Ruby is a dynamic language without explicit interfaces, which makes it easy to introduce abstractions at the moment you actually need them.

The ultimate goal isn’t to follow a principle to the letter, but to write code that’s easy to understand, test, and modify. DIP is a tool to achieve that, not an end in itself.

Test your knowledge

  1. What is the core idea behind the Dependency Inversion Principle?

  2. In Ruby, why is dependency injection less necessary for testing than in Java?

  3. If OrderNotifier directly instantiates EmailService.new inside its method and calls send_email, what problem does this create?

  4. When should you use dependency injection in Ruby?

  5. What is the relationship between dependency inversion and dependency injection?