Symbolic Logic:Programming:Criticisms of Object Oriented Programming

Many OO languages are compromised in their implementation of set theory, which has led to problems with the use of inheritance. In particular, imperative languages make compromises that undermine the logical structure of set theory.

Part of this compromise is the requirement in common OO languages (C++, Java, Eiffel ...) that a call to a function signature invoke a single function body. This is not a logical consequence of set theory.

A function in an inheriting class overriding the implementation in the base class is a logical exception system. An exception system is one where I make a statement, but then qualify it with an exception. For example,
 * "I am always good ... except for last night"

Exception systems play a role in the evolution of knowledge. But any statement that is qualified by an exception cannot be relied on as always true in every situation. Exception qualification should be used only as "the exception", not as the bedrock of solid re-usable logic.

The exception system breaks down under multiple inheritance because it cannot resolve to a single function to call (the diamond problem).

Problems with the implementation of multiple inheritance in imperative languages have led to single-inheritance languages (Java, C#, ...). This further compromises the logical underpinnings of OO in those languages. Single inheritance is equivalent to saying that an object may only be a member of one set hierarchy. Clearly this is not true, so single inheritance is a corruption of inheritance.
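A minimal sketch of the point, in Python (the class names here are illustrative, not from the text): an object can naturally belong to two independent set hierarchies, which single inheritance cannot express.

```python
class Swimmer:
    def swim(self):
        return "swim"

class Flyer:
    def fly(self):
        return "fly"

class Duck(Swimmer, Flyer):
    # a duck is a member of BOTH sets: Swimmers and Flyers
    pass

d = Duck()
# d is simultaneously a Swimmer and a Flyer; a single-inheritance
# language would force us to pick one hierarchy and fake the other.
```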

The Diamond Problem
The exception system implicit in the overriding of virtual functions is designed to give a single function body to execute when a function is called. The Diamond Problem is a case where it fails to do this. Suppose class D inherits from B and C, which both inherit from A, forming a diamond-shaped inheritance diagram. A method signature defined in A, but overridden in both B and C, becomes ambiguous in D.
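The diamond can be sketched in Python (class names chosen to match the A/B/C/D description above). C++ rejects the ambiguous call outright; Python instead imposes an arbitrary linearization (the method resolution order) so that still only a single body runs:

```python
class A:
    def poke(self):
        return "A"

class B(A):
    def poke(self):          # overrides A
        return "B"

class C(A):
    def poke(self):          # also overrides A
        return "C"

class D(B, C):               # diamond: D -> B and C -> A
    pass

# The call D().poke() is logically ambiguous between B's and C's body.
# Python resolves it by the MRO (D, B, C, A) and invokes only B's body.
```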

The problem arises because inheritance, as implemented in imperative languages, cannot accept that a call may invoke multiple function bodies.

As functions are implementations of rules in logic, this is rather strange. Suppose we have a set of objects that blink when poked, and a set of objects that beep when poked. The objects in the intersection of the blinkers and the beepers must both blink and beep when poked. In logic,


 * $$x \in Blinkers \land x.Poke \implies x.Blink$$
 * $$y \in Beepers \land y.Poke \implies y.Beep$$
 * $$BlinkBeepers = Blinkers \cap Beepers$$
 * $$z \in BlinkBeepers \land z.Poke \implies z.Blink \land z.Beep$$

Using classes, Blinkers and Beepers each become a class defining behaviour for Poke, and BlinkBeepers inherits from both.
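A hedged sketch of such a class declaration in Python. Python's cooperative `super()` mechanism is used here only to demonstrate that a single call *can* invoke both function bodies; the base class `Pokeable` and the list-of-actions encoding are assumptions for illustration:

```python
class Pokeable:
    def poke(self):
        return []

class Blinker(Pokeable):
    def poke(self):
        # one body: a poked Blinker blinks
        return super().poke() + ["blink"]

class Beeper(Pokeable):
    def poke(self):
        # another body: a poked Beeper beeps
        return super().poke() + ["beep"]

class BlinkBeeper(Blinker, Beeper):
    # the intersection: a single poke() call runs BOTH bodies,
    # in an order fixed arbitrarily by Python's MRO
    pass
```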

From the logic, such a class declaration is equivalent to the four statements listed earlier: an object in both sets must both blink and beep when poked.

Current OO languages don't allow a function call to invoke two bodies because there is no defined order in which to call them. In logic this is not a problem. In an imperative programming language it is a big problem, as the behaviour of a program may then not be deterministically defined. In the above case there is no basis to believe that the Beep should come before the Blink or vice versa.

The Diamond Problem has led to the removal of multiple inheritance in successor languages to C++ like Java and C#. But in reality multiple inheritance is as natural as an object being a member of multiple sets.

The order problem with Output Roles and Services
The order problem described above is re-introduced by the addition of output roles and services to logic programming.

Output roles allow us to write code that looks imperative but is re-interpreted (after the allocation of extra variables) as pure logic. In the above example we may want to add logging to the Blink and Beep methods. This implies the output state of the log file being produced by the Blink function and passed to the Beep function (or vice versa if the Beep comes before the Blink).

Services may also lead to slightly different behaviour depending on the order of the function calls.

Services and output roles should be used sparingly. Output roles may introduce an order of processing where none is required, which blocks parallel processing. Services are intended only for getting around the strict laws of mathematics, for use in specific situations.

So the problems with output roles and services are not sufficient to make us want to block a function call invoking multiple function bodies.

The Return Type
The other criticism levelled at multiple inheritance is that it makes the interface brittle, because the return type cannot be changed in an overriding function. In a Logic Programming paradigm, the return value is regarded as equivalent to another parameter. So the return type should be part of the signature.
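As a hedged illustration of "the return value is just another parameter" (the function name and dispatch scheme here are hypothetical, not from the text): if the desired result type participates in the call, then two functions differing only in return type are no longer ambiguous.

```python
# Hypothetical sketch: the requested result type is passed as an
# ordinary argument, so it forms part of the effective signature.
def to_number(text, result_type):
    if result_type is int:
        return int(text)
    if result_type is float:
        return float(text)
    raise TypeError(result_type)
```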

Too Closely Coupled
Some people say that class inheritance leads to close coupling between the inheriting and the inherited class; that is, the developer needs to know how the inherited class works in order to use inheritance properly. I think this perception arises because of the structural problems with inheritance as implemented in current imperative languages.

If inheritance is simply an implementation of the subset relation I find it hard to believe that there can be serious problems with it. It is a simple subset relationship.

Inheritance that is used without there being a genuine "is a" relationship is clearly incorrect because it goes against the underlying logic.

Indirect Logic
Inheritance implements set theory. The application of a rule to a subset (which corresponds to calling an inherited function on a class) depends on two statements,
 * Membership of a set.
 * The rule itself.
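The two statements above can be sketched in Python (the class names are illustrative assumptions):

```python
class Animal:
    def breathes(self):          # the rule, stated once for the whole set
        return True

class Dog(Animal):               # membership: Dog is a subset of Animal
    pass

rex = Dog()
# Applying the rule to rex combines the two statements:
#   1. rex is an Animal (membership)
#   2. all Animals breathe (the rule)
# The indirection costs a step, but the rule is written only once.
```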

It is more complex than a direct statement of the rule on the set. However, this complexity leads to greater re-use. There is a trade-off here and, unfortunately, different programmers feel differently about the trade-off.

One solution may be for the Integrated Development Environment to display inherited functions, as they apply to the class (while protecting them from accidental modification). This allows the best of both worlds. The bottom up programmer focusing on the details of execution can see exactly what happens, while the top down programmer can gain the code re-use and the higher level abstraction.

Conclusion
The use of inheritance where there is no actual subset or "is a" relationship is bad. Any code that works now but is misleading about the nature or structure of the problem is going to make continued development of the code more difficult. Used and implemented properly inheritance is a powerful tool.