There are still two slots available for the alloy workshop! I’ve been hard at work adding a bunch of teaching innovations to the class, which I wanted to talk about this time, but something more interesting came up.
This essay says that inheritance is harmful and that, if possible, you should “ban inheritance completely”. You see these arguments a lot, along with things like “prefer composition to inheritance”. Most of them point out that inheritance causes problems in practice. But that doesn’t preclude inheritance working well in another context, maybe with a better language syntax. Nor does it explain why inheritance became so popular in the first place. I want to explore what’s fundamentally challenging about inheritance and why we all use it anyway.
My favorite essay on inheritance is Why inheritance never made any sense. In it Graham argues that there are actually three different things we mean by “inheritance”:
- Ontological inheritance is about specialisation: this thing is a specific variety of that thing (a football is a sphere and it has this radius)
- Abstract data type inheritance is about substitution: this thing behaves in all the ways that thing does and has this behaviour (this is the Liskov substitution principle)
- Implementation inheritance is about code sharing: this thing takes some of the properties of that thing and overrides or augments them in this way.
Since conventional class-based inheritance conflates these three types of inheritance, it doesn’t really satisfy any of them properly. This is what makes it so challenging to use in practice. Things like abstract data types and modules and such only hit one of these kinds at a time, properly separating the concerns and making them easier to use.
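To make the conflation concrete, here’s a contrived Python sketch (my example, not Graham’s) where a single subclass declaration carries all three meanings at once:

```python
import math


class Shape:
    """A located shape with an abstract area."""

    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    def area(self) -> float:
        raise NotImplementedError

    def describe(self) -> str:
        return f"shape at ({self.x}, {self.y}) with area {self.area():.2f}"


class Circle(Shape):
    # Ontological: a circle *is* a specific variety of shape.
    def __init__(self, x: float, y: float, radius: float):
        super().__init__(x, y)  # Implementation: reuse the parent's init code
        self.radius = radius

    # Abstract data type: fulfill the behavioral promise that area() works,
    # so a Circle substitutes anywhere a Shape is expected (e.g. in describe()).
    def area(self) -> float:
        return math.pi * self.radius ** 2
```

The single `class Circle(Shape)` line is doing ontology, substitution, and code sharing simultaneously; a language with separate constructs for each would let you pick them independently.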
So then why do we use inheritance instead of ADTs and modules and stuff? And that’s where we need to look at the history.
Where did inheritance come from? As with many things in OOP, it comes from SIMULA-67. The creators, Dahl and Nygaard, introduced objects as a generalization of SIMULA I’s simulation syntax. That’s important for understanding why inheritance works the way it does: it was originally designed for simulation software. The first examples of inheritance are for modeling customer orders and a jobshop simulation!
SIMULA had a big influence on other object languages; Smalltalk credits it for a lot of its design decisions. This meant that inheritance was pretty entrenched by the time the alternatives started appearing. And that’s the key point: inheritance came first. The idea of subtypes, or abstract data types, comes from Barbara Liskov’s CLU. That was six years later, in part based on her knowledge of SIMULA. Notably, CLU was a research language, not an industry language. ADTs only entered widespread industry use with Java interfaces, about two decades after CLU.1 In the intervening thirty years, inheritance was established by C++, Smalltalk, and Object Pascal.
Similarly, modularization had been a concept for a while, but modules only appeared as a first-class language construct with Modula, which came out in 1975. Even today most industry languages don’t have proper modules that encourage code specialization. Most languages with “modules” really just have namespaces.
Time to take off the Fact Hat and put on the Speculation Hat. It seems to me that ADTs and modules were in part influenced by the existence of inheritance. People saw the idea and tried to separate out the various concerns. This happens quite often in language design, and in any sort of technical development, really. People introduce a practical innovation that blends together a bunch of abstract concepts without knowing about those concepts beforehand. It’s only once the innovation is used in practice and people build an “intuition” for it that they start to see the abstract concepts and tease them apart. Of course, once something becomes established it’s very hard to get rid of. And because a lot of languages started out using inheritance, it became the common thing.
See also: everything else in software.
It’s also notable as the first case where we put a syntactic relationship between two classes. Surprisingly, it remains one of the only ways to relate two classes: you’ve basically got interfaces, traits/mixins, and inheritance, and that’s it. I suspect this is because most object relationships are domain-specific, while language syntax tries to stay general.
There is one more part of the story we need to talk about, though: can we do inheritance in a better way? The key language here is Eiffel, by Bertrand Meyer. Eiffel was once a rising giant in the OOP world but has mostly faded into irrelevance. Among other things, almost all of its class relationships were inheritance-based. There were no interfaces, no modules, no traits, etc. You were even expected to use multiple inheritance quite regularly.
This isn’t as bad as it sounds. Eiffel was designed from the start to avoid a lot of the pitfalls that you often see with inheritance. For example, it avoided the “diamond problem” with a robust renaming mechanism. It also had a really interesting feature that made its inheritance a lot more powerful: code contracts. You could place preconditions and postconditions on methods that would be checked on every call. If you inherited from a class, though, Eiffel guaranteed that you could only weaken preconditions and only strengthen postconditions. This means you can substitute a child class anywhere the parent class would be accepted and know that all of the invariants are still satisfied. That’s a pretty cool language feature!
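Eiffel enforces this rule at the language level (via `require else` and `ensure then` on redefined methods); as a rough illustration, here’s a hand-rolled Python sketch, with the class names and the doubling example being my own invention, writing contracts as plain assertions:

```python
class Doubler:
    def double(self, n):
        # Precondition: a non-negative integer.
        assert isinstance(n, int) and n >= 0
        result = n * 2
        # Postcondition: the result is at least as large as the input.
        assert result >= n
        return result


class LenientDoubler(Doubler):
    def double(self, n):
        # Weakened precondition: any integer is now accepted.
        assert isinstance(n, int)
        result = n * 2
        # Strengthened postcondition: for every input the parent accepts,
        # `result == 2 * n` implies the parent's guarantee `result >= n`.
        assert result == 2 * n
        return result


# Substitution holds: any call that is valid on a Doubler also works on a
# LenientDoubler and still satisfies the parent's contract.
d: Doubler = LenientDoubler()
```

The difference is that in Eiffel the compiler checks this weakening/strengthening relationship for you; here a violation only shows up as a runtime assertion failure.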
Incidentally, Meyer also coined the “open closed principle”, which is the O in SOLID. So his thinking on languages did have at least some effect on our modern development, if somewhat indirect.
Unfortunately, Eiffel also had a minor problem with inheritance. You see, Eiffel is statically typed. The input types of a method’s parameters are effectively preconditions. This means, to be type safe, you should only be able to “weaken the preconditions” on an inherited method’s parameters’ types, like say “instead of taking any natural number, this method can now take any integer”. This is equivalent to replacing a type with its supertype, or “contravariance”. But Meyer thought that “wasn’t useful” and made method parameters covariant, replaceable with their subtypes. This makes the type system unsound.
They call this the “CATcall” problem and still haven’t figured out how to fix it.2
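Here’s a rough Python analogue of the problem (hypothetical names; Python’s runtime won’t stop you, which is exactly the point — a sound static checker would reject the override):

```python
class Animal:
    pass


class Dog(Animal):
    def fetch(self) -> str:
        return "stick"


class Handler:
    def handle(self, a: Animal) -> None:
        pass


class DogHandler(Handler):
    # Covariant override: the parameter narrows from Animal to Dog.
    # Eiffel allowed this; soundness demands the same type or a supertype.
    def handle(self, a: Dog) -> None:
        a.fetch()  # safe only if `a` really is a Dog


h: Handler = DogHandler()
# h.handle(Animal()) type-checks against Handler's signature,
# but raises AttributeError at runtime: Animal has no fetch().
```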
So yeah, inheritance has problems. I mean of course there are cases where you can safely use it, and there are cases where it’s the right choice, but it definitely shows signs of being a “first-generation solution”.
Probably something bigger here worth exploring but that’s getting further away from “why inheritance”, so I’ll leave that for another newsletter. Cheers!