The written word is a beautiful thing. We string series of words together into sentences and paragraphs in an effort to convey feelings, sentiments, and viewpoints—to paint mental pictures and sway opinions. There’s imprecision to language, though—a flexibility in the meaning of words and phrases that allows for interpretation, enabling everyone to experience language differently, just as we each experience our senses differently. In literature, this has a marvelous effect, allowing us to create our own versions of an author’s world in our heads. A thoughtful author understands this and relishes readers’ personal interpretations of their work.
In business and critical analysis, however, this elasticity of language can have dire effects. When you are relying on someone’s written report in a decision-making process, a word like “probable” or a phrase like “a low chance of” can be wildly misinterpreted. Imagine if your idea of probable is anything over 60%, but the reader of your report interprets it as 90%.
There’s likely no sector more vulnerable to this misinterpretation of language than the intelligence industry. Sherman Kent, a Yale history professor and division chief within the CIA and its precursor, the OSS, saw these shortcomings of language and the potentially disastrous consequences they could have on decision making. In response, Kent penned a now-classic piece on the subject entitled “Words of Estimative Probability.”
Within the essay, Kent describes a situation in 1951 where the USSR appeared to be interested in invading Yugoslavia. The following statement appeared in an analyst’s report with regard to the emerging situation: “Although it is impossible to determine which course the Kremlin is likely to adopt, we believe that the extent of Satellite military and propaganda preparations indicates that an attack on Yugoslavia in 1951 should be considered a serious possibility.” (Emphasis added.)
Kent later recounts a meeting with a senior decision-maker which addressed this vagueness of language:
A few days after the estimate appeared, I was in informal conversation with the Policy Planning Staff’s chairman. We spoke of Yugoslavia and the estimate. Suddenly he said, “By the way, what did you people mean by the expression ‘serious possibility’? What kind of odds did you have in mind?” I told him that my personal estimate was on the dark side, namely, that the odds were around 65 to 35 in favor of an attack. He was somewhat jolted by this; he and his colleagues had read “serious possibility” to mean odds very considerably lower. Understandably troubled by this want of communication, I began asking my own colleagues on the Board of National Estimates what odds they had had in mind when they agreed to that wording. It was another jolt to find that each Board member had had somewhat different odds in mind and the low man was thinking of about 20 to 80, the high of 80 to 20. The rest ranged in between. (From “Words of Estimative Probability” by Sherman Kent)
This means that “a serious possibility” was being interpreted as anywhere from a 20% to an 80% likelihood. It’s quite clear from this example just how useless a phrase like that can be. In response to this eye-opening experience, Kent began a campaign to assign percentage values to report language to prevent this type of misinterpretation. After one failed attempt at creating eleven gradations of language, which he found overly complex and too rigidly precise, he and a colleague came up with the following five levels of certainty:
The General Area of Possibility

| Chance | Term |
| --- | --- |
| 93%, give or take about 6% | Almost certain |
| 75%, give or take about 12% | Probable |
| 50%, give or take about 10% | Chances about even |
| 30%, give or take about 10% | Probably not |
| 7%, give or take about 5% | Almost certainly not |
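Kent’s five levels amount to a simple lookup from estimative terms to probability bands. As a minimal sketch (the dictionary and function names here are illustrative, not from Kent), the mapping and a reverse lookup from a numeric estimate to its term might look like this:

```python
# Kent's five estimative bands: each term maps to
# (center_percent, tolerance_percent), e.g. "probable" covers 63-87%.
KENT_BANDS = {
    "almost certain": (93, 6),
    "probable": (75, 12),
    "chances about even": (50, 10),
    "probably not": (30, 10),
    "almost certainly not": (7, 5),
}

def term_for(probability: float) -> str:
    """Return the Kent term whose band contains the given probability (0-100).

    Bands are checked from most to least certain, so a value on a
    shared boundary (e.g. 87) resolves to the more certain term.
    """
    for term, (center, tolerance) in KENT_BANDS.items():
        if center - tolerance <= probability <= center + tolerance:
            return term
    raise ValueError(f"{probability}% falls outside every Kent band")

# Kent's own 65-to-35 estimate would have been reported as "probable":
print(term_for(65))  # probable
```

Notice that Kent’s own “65 to 35 in favor of an attack” would have landed squarely in “probable”—a far narrower claim than the 20%–80% spread his colleagues read into “serious possibility.”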
Unfortunately for Kent, his proposition received a lot of pushback and was not adopted by the Central Intelligence Agency. The thing is, while fuzzy language obfuscates the author’s meaning, there is safety in that very fuzziness. If the author is right, she can trumpet her predictive prowess; if she’s wrong, she has left herself enough leeway to avoid being pilloried.
It’s true: while we want people to give us their best estimates, we also want to hold them accountable, and sometimes use them as scapegoats. Think about the wrath a weatherman receives when he predicts a 10% chance of precipitation and it then rains. People are irate at this perceived inaccuracy, despite the fact that what he was trying to convey was that, on days like this one, with these conditions, it will rain one time in ten.
This need to protect ourselves, unfortunately, is the double-edged sword of precise language. However, it’s quite apparent how detrimental using ambiguous language can be, whether we’re referring to intelligence agencies or corporations. Think about all the ways you might be misinterpreted in reports, competitive analyses, correspondence, etc. if one simple phrase like “serious possibility” is interpreted in such different ways.
While it’s probably impractical to adopt a rigid wording rubric like the one Kent proposed, it gives all of us impetus to be as explicit as possible in our communication and to ask questions whenever we think there is room for misinterpretation. Unless we’re corresponding with robots, this is the only reliable way to keep things from getting lost in translation and to avoid the misinformed decisions that follow. Likewise, our move toward precision should be coupled with more understanding for the author—an acceptance that estimates are just that, estimates, and not absolute truths. By allowing people that leeway, we can empower them to be more precise in their communication.
Interested in learning more about communication? Check out our Communication for Leaders program.