Some Perspectives on the Nature of Meaning
If language is not correct, then what is said is not what is meant. If what is said is not what is meant, then what ought to be done, remains undone. - Confucius
I Am Right, You Are Wrong: From This to the New Renaissance: From Rock Logic to Water Logic by Edward de Bono
For several months now I’ve been trying to absorb the concepts presented in the book “I Am Right, You Are Wrong” by Edward de Bono. It’s a fascinating book in which he attempts to do two things. First, he presents a series of mechanisms through which perception may arise, physiological mechanisms that are inevitably tightly bound to the way we think and learn. His model is a simplified version of real neurological processes and involves the complex, non-linear interaction between experience and thought from which meaning is built. He talks in great detail about the thinking patterns that he believes emerge from this system and, by analogy, within us. He also suggests a new vocabulary (catchment, knife-edge discrimination, etc.), a kind of domain language, for describing and working with them.
Second, he works backwards from his model to design tools and techniques which not only help us to understand our own thinking processes but are also of great benefit to things like communication, creativity and learning. Lateral thinking, the six thinking hats and learning backwards are just three well-known and highly practical examples.
With that in mind I guess I’m primed to pick up on similar ideas (like when you buy a new car and suddenly the roads seem full of cars of the same model; incidentally, this phenomenon is something de Bono exploits with his thinking tools and calls readiness), and in these last few weeks I have seen a whole series of references to the concept of meaning, seen from very different angles. This blog post is a walk through some of those perspectives and ends with some observations on the practical consequences for software engineering.
Agile Software Development by Alistair Cockburn
I’d like to start with Alistair Cockburn’s arguments for building a shared vocabulary in his book “Agile Software Development”. Through a series of examples he demonstrates what he calls the impossibility of communication without shared experience. Fundamentally, the vocabulary we use to express knowledge or concepts relies on the labels that we associate with them. For those labels to have any meaning, and for effective communication to take place, they must be shared with others and refer to the same concepts, something that is only possible through shared experience. This is why he advocates fluid and continuous team interaction during a project to build that shared experience. This idea, I take it, was the forerunner of the DDD practices advocating a ubiquitous language (although Kevlin Henney specifically mentions how even the word ubiquitous is ironically misused in this context) and a clear context for software models.
There Is No Moon If We Close Our Eyes - a reference to Einstein’s question about the subjectivity of quantum mechanics: “Do you really believe that the moon isn’t there when nobody looks?”
Sometimes Cockburn’s descriptions take on a very slightly mystical tone (which I like), but Chris Fields and Riccardo Manzotti take it to another level in their conversation with the interesting title “There Is No Moon If We Close Our Eyes”, where they talk explicitly about the nature of communication and how it relies entirely on one’s own personal perception (both speakers mention the concept of umwelt, something new to me but very relevant here). Riccardo introduces the idea of the intimacy of knowing: knowledge is fundamentally personal to the observer and, reiterating Cockburn’s assertions, impossible, even in principle, to communicate perfectly. This means that communication, and indeed information itself, must by its nature be related through an overlap of experiences to have any meaning whatsoever. This is very easy for programmers to understand: a variable cannot have any meaning outside the context of its use and therefore only represents actual information inside that context.
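To make that concrete, here is a minimal sketch of my own (not from the conversation; the value and names are hypothetical) showing how the same raw value only becomes information once a context interprets it:

```python
from datetime import datetime, timezone

# The bare value tells us nothing on its own.
raw = 1514764800

# Interpreted in one context it is a moment in time...
as_timestamp = datetime.fromtimestamp(raw, tz=timezone.utc)
print(as_timestamp)  # 2018-01-01 00:00:00+00:00

# ...and in another it is an identifier, a price in cents, or a distance.
as_account_id = f"account-{raw}"
print(as_account_id)

# Only the surrounding context (the name, the type, the domain)
# turns the value into information.
```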
What Do You Mean? - @KevlinHenney’s @kandddinsky talk on semantics, DDD, epistemology, cognition, semiotics, context, design, code, architecture, creativity, learning, etc.
Kevlin Henney also brings up this point in his talk “What do you mean?”, picking up on the ironic misuse of the word semantics and introducing the field of semiology: the difference between the signified and the signifier (related, in a way, to the difference between a character and a character encoding, so often confused to the annoyance of non-ASCII users). These signifiers are usually represented by visual symbols but could take any sensory form. Crucially, symbols can be interpreted differently by different people in different contexts. He gives the example of the different regional meanings of the word “dinner”, which have caught me out many a time, coming as I do from Cornwall where, as in the north of England, it means the midday rather than the evening meal. He also points out that lazy usage of words, such as “value” versus “estimated value”, has a crucial effect on how we approach a subject. As mentioned earlier, the word ubiquitous, used in DDD to signal shared context, is also inappropriate, although understood.
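The character-versus-encoding confusion mentioned in passing above is easy to demonstrate; a small sketch of my own (not from the talk), showing one signified with several signifiers:

```python
# The signified: the single character 'é'.
char = "é"

# The signifiers: different byte sequences that stand for the same character.
utf8_bytes = char.encode("utf-8")      # b'\xc3\xa9'
latin1_bytes = char.encode("latin-1")  # b'\xe9'
print(utf8_bytes, latin1_bytes)

# Reading bytes with the wrong convention changes the meaning entirely:
print(utf8_bytes.decode("latin-1"))    # 'Ã©', the familiar mojibake
```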
Beware double meanings when you talk science - It’s great to share your expertise with a wider audience, but it helps if you speak their language.
By coincidence another video came up in a similar vein. Words such as theory, determine and significant have very specific meanings in the scientific community, which may differ from the meanings a member of the public would probably give them. If no allowance is made for these different interpretations then misunderstandings are inevitable.
On a different tack, but related to the significance of symbols, there was a Twitter storm followed by an article by Gary Marcus called “Why Robot Brains Need Symbols” on the necessity of symbols for intelligence. I happen not to agree with his analysis, but the idea of symbols comes up again and again in the realm of artificial intelligence. I like to think about the ground-breaking work of DeepMind on their AlphaZero project. AlphaZero is a general reinforcement learning system which taught itself to play Go and then chess. What interests me in this context is not that a weighted neural network plays chess, nor its (state-of-the-art) learning algorithm; it’s the fact that the resulting game play exhibits what could be considered rational behaviour. That behaviour emerges from the training and doesn’t use symbols to do it. Its logical play is NOT the logic of mathematics. Or maybe it is, somewhere deep within its neural net.
This deep emergent rationality could be just a hint of what I believe Jordan Peterson calls our a priori framework, imparted to us not only by experience but also by stories and culture. It is something we need in order to make sense of a chaotic (as in overwhelmingly complex) world and to reduce the number of variables to something tractable. This ties back to the ideas of perception presented by de Bono and contrasts starkly with the pragmatic, if rather glib, rationalism presented by Sam Harris in that same debate.
Of course all of these debates and articles cover much more than has been summarised here. The ideas around the meaning of meaning form an enormous subject area. This post is really not much more than a reminder of these resources and an excuse to chain them together into some kind of whole. Nonetheless, from these ideas come immediate insights which can help us to build software too.
The software industry, although dominated by the US, is a worldwide pursuit, and as such it is remarkable that it is as homogeneous as it is. Nevertheless, it is sometimes difficult for people from different projects, in different companies, with different academic backgrounds, and from different countries, languages and cultures to talk about things from a point of shared experience; and, as we saw above, shared experience is fundamental to communication. Examples are numerous.
There seem to be only two ways of coming to that shared understanding. The first is by definition. This is done in rigorous fields such as science and mathematics, where definitions are given precise meanings using mathematical language, but even then definitions can be confused (does the set of natural numbers, ℕ, include 0? The answer is that it doesn’t, except when it does). Likewise in software engineering, terms like REST, which were defined with a high degree of formality, are so misunderstood that the misunderstanding is now the de facto meaning. I wonder whether we could come up with a list of readily confused software engineering words and phrases, as was done, for example, in the article “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases” by Scott O. Lilienfeld et al. in the field of psychology. REST, estimation, agile and many others would be on that list.
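The ℕ example is a reminder that even a “precise” definition is always relative to a stated convention. Here is a small sketch (my own illustration; the function name is hypothetical) that makes the convention an explicit parameter rather than an unspoken assumption:

```python
def is_natural(n: int, include_zero: bool = True) -> bool:
    """Whether n counts as a natural number under an explicitly chosen convention.

    Set theorists (and ISO 80000-2) usually include 0; number theorists often
    start at 1. The answer depends on the shared convention, not on ℕ alone.
    """
    lower_bound = 0 if include_zero else 1
    return n >= lower_bound

print(is_natural(0))                      # True under one convention...
print(is_natural(0, include_zero=False))  # ...False under the other
```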
The other way to reinforce shared experience is by bringing discrepancies to light. To do that we need to have a deeper debate on the nature of meaning, to be aware of the multiple usages of the same signifiers, and to use empathy and imagination to anticipate and reduce misunderstanding.
I’m particularly sensitive to my misunderstandings of others and their misunderstandings of me (and it happens a lot!), but I’m also curious about how considerations like these, about communication in an ever more complex, interconnected and abstract world of meaning, might pave the way to deeper understanding and even better collaboration. I’m going to try and do my bit anyway.