When honesty seems impossible
Self-sealing processes are one of the hardest barriers to break down when seeking organisational improvement.
Most managers will say they want to hear the truth from their employees. Famously, the Australian public service lists “frank and fearless advice” as a core value. Yet as the Australian Public Service Commission coyly noted in a recent Code of Conduct review, critical views of the infamous ‘Robodebt’ scheme “were not adequately reflected in the briefing of Ministers” (i.e. were hidden).
This lack of honest counsel resulted in the roll-out of administrative practices applied to social services recipients that were later found to be unlawful, leading to the waiver of $1.7b in debts and $2.4b in total compensation paid.
How could such a situation come about? Organisational theorist Chris Argyris calls out these patterns of information-withholding behaviour in his article Making the Undiscussable and Its Undiscussability Discussable, where he says:
(paraphrased)
Most individuals are taught through acculturation and socialization, a set of values, action strategies, and skills that lead them to respond automatically to threatening issues by “easing in,” “appropriately covering”, or by “being civilized”. However described, they add up to making threatening issues undiscussable and then to making their undiscussability undiscussable.
Truth may be a good idea when it is not threatening, but when information is threatening, the normal tendency is to hide the fact that this is the case and to act as if you are not hiding the facts.
Argyris defines “threatening information” as anything that triggers defensive routines, behaviours by individuals that seek to avoid embarrassment or threat, or feelings of vulnerability or incompetence.
Most employees learn that they must understand the gap between the espoused theory of action of their manager and organisation (the values and skills they say they apply) and the theory-in-use (the strategies they actually apply), especially when presented with choices about whether and how to communicate threatening information.
By definition, this gap between theory and practice is unacknowledged. Worse, once people believe that calling attention to the gap would itself be interpreted as threatening information, it becomes equally impossible to discuss the gap as the cause of an issue. As executives acknowledged to Argyris in research interviews on the topic (The Executive Mind and Double-Loop Learning), after accurately diagnosing such a situation:
Yes [said one] you are right; there is the inconsistency. But what you fail to realize is that none of us would say to [our manager] what we have written down.
No, added another with a smile, we’re too smart to say what we think.
This is what Argyris calls a “self-sealing process”. Once the undiscussability of a problem becomes undiscussable, it may literally be impossible to talk honestly about how to fix it, even where this undiscussability is directly and obviously causing organisational failures.
The results can be particularly devastating when it leads to information presented to top-level decision-makers being compromised. As Argyris explains:
(paraphrased)
Executives have great reasoning skill, but as with most skilled behavior, they rarely think about how they think unless they make an error.
When they do make errors, other people — especially subordinates — may feel it is safest to play down the error, or may ease in the correct information so subtly that the executive will probably not even realize that they made an error.
These actions at the upper levels are especially detrimental to the organization’s capacity to detect and correct errors, to innovate, to take risks, and to know when it is unable to detect and correct error.
Argyris calls these patterns of behaviour Model I thinking. They are particularly common in managers and executives, and are typified by:
Seeking to be in control of a situation
Maximizing winning and minimizing losing
Suppressing negative feelings
Acting unilaterally to save face — your own and others’
Using pseudo-rationality to justify defending your position
Model I thinking is fine, even efficient, in uncontroversial situations. But as soon as defensive routines are engaged, Model I inevitably leads to self-fulfilling prophecies, self-sealing processes, and escalating error.
Argyris proposes an alternative Model II thinking approach that prioritises:
Provision of valid and validatable information
Minimizing of defensive relationships
Acknowledging that feelings have meanings
Open testing and inquiry over control
Free and informed decision-making
In the article Double Loop Learning in Organizations, Argyris cautions against seeing Model II as the opposite of Model I. Under Model II, people are still encouraged to hold views about what a good outcome looks like, and to use their intellect in service of better problem-solving. However, the key innovation of Model II is to recognise that feelings have meanings, and that these meanings have to be examined to work out whether feelings are valid or productive.
Done correctly, Model II thinking opens up honest discussions on a problem by looking at the most tangible and objective information elements, and then gradually expanding the scope to more subjective elements of meaning and feeling.
Argyris calls this the ladder of inference:
Provide the observable data that you use to infer your evaluations or attributions, and check to see whether the recipient agrees with your data
Make explicit the cultural meanings that you inferred from the data and seek confirmation from the other person
Explain your judgments and opinions to show why the consequences of the actor’s action were inevitable, without implying any intentions or fault in producing such consequences
Encourage the recipient to express any feelings or ideas they may have about the process
Two concepts that are key to this approach:
An unillustrated evaluation is when you tell a person they are wrong, but without the data and logic of how you arrived at that conclusion
An unillustrated attribution is when a behaviour or belief of a person is assumed, without any data or logic to justify holding that view
Whenever unillustrated evaluations or attributions are used, the target will not know their basis, and will therefore inevitably feel “bewildered and misunderstood”, triggering a covert or overt Model I defensive reaction.
On the other hand, exploring a chain of reasoning through the hierarchy set out above makes it possible for evidence and assumptions to be tested without judgement, focusing on the consequences of choices rather than intent. This is a skilful task, often best assisted by an expert moderator who can stop participants from slipping back into harmful Model I thought patterns and behaviours.
Adopting Model II thinking generally requires conscious effort and buy-in from participants, since most people are more familiar and comfortable with a Model I approach. Indeed, in organisations with highly entrenched self-sealing patterns, change can be extremely difficult, sometimes requiring a full-blown crisis to trigger action. Despite these challenges, Model II thinking remains invaluable for deeper learning, innovation, and change in organisations, and should be closely examined where change appears otherwise intractable.

