In the current political climate someone had to say it – we all seem convinced we’re right about politics, religion or science these days. Of course we are, we’re certain – and everyone disagrees. Perhaps it has always been so, but how can we think this way? Each of us thinks we are right. But what makes us so sure of ourselves – the sheer pigheadedness of others compared to our own brilliance? Is everyone else just stupid, or willfully ignoring the obvious truth, or just thinking badly – unable to work out the logic of most everything important?
Maybe it’s none of those things. Maybe it’s that certainty has nothing to do with thinking at all – it’s a different sort of thing, an involuntary neurological function, more akin to feeling than anything else. That’s what Robert Burton contends in The Certainty Epidemic, an item that appeared in Salon on Friday, February 29, an excerpt from his new book, On Being Certain: Believing You Are Right Even When You’re Not (St. Martin’s Press; February 5, 2008).
Spending the week attending to events in the presidential race – watching, with bemused alarm, the two remaining Democrats have at each other in that debate in Cleveland, and listening to the president at a press conference say once again that unless the telecom companies are granted legal immunity for having helped the government monitor communications of all sorts in the past, and immunity for whatever such monitoring they will do in the future, then certainly we’ll all die, even if such monitoring is perfectly legal – makes one wonder what’s going on here. Everyone is certain of everything. And unless you believe everything, no matter how contradictory, such bold certainty just generates massive uncertainty. All parties assert they are right. They are certain, and that certainty is held up as proof they are right. Is certainty proof of validity? Get serious – ask anyone who bet on the Patriots in the Super Bowl.
That’s why it may be important to turn to Burton, as he’s been Board-Certified in Neurology by the American Board of Psychiatry and Neurology. It’s high time to turn to a neurologist, someone who understands how the brain works.
Here’s his take on the epidemic:
Certainty is everywhere. Fundamentalism is in full bloom. Legions of authorities cloaked in total conviction tell us why we should invade country X, ban “The Adventures of Huckleberry Finn” in schools, eat stewed tomatoes, how much brain damage is necessary to justify a plea of diminished capacity, the precise moment when a sperm and an egg must be treated as a human being, and why the stock market will revert to historical returns. A public change of mind is national news.
He asks why this should be so. He dismisses the idea that it’s all a matter of stubbornness, arrogance or misguided thinking. Yes, you can leave that to the partisans, the folks who say everyone on the left has Bush Derangement Syndrome and the folks on the left who say the conservatives have been drinking deeply of the Bush Kool-Aid, or the neoconservative Kool-Aid or whatever. Each side says the other side is not thinking, just reacting emotionally and irrationally. Each side says they alone are thinking. But what if the problem is that no one is thinking and the problem is rooted in brain biology? That’s the idea here:
Since my early days in neurology training, I have been puzzled by this most basic of cognitive problems: What does it mean to be convinced? This question might sound foolish. You study the evidence, weigh the pros and cons, and make a decision. If the evidence is strong enough, you are convinced there is no other reasonable answer. Your resulting sense of certainty feels like the only logical and justifiable conclusion to a conscious and deliberate line of reasoning.
But modern biology is pointing in a different direction. It is telling us that despite how certainty feels, it is neither a conscious choice nor even a thought process. Certainty and similar states of “knowing what we know” arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. Feeling correct or certain isn’t a deliberate conclusion or conscious choice. It is a mental sensation that happens to us.
So certainty has involuntary neurological roots. If so, this has major implications:
If science can shame us into questioning the nature of conviction, we might develop some degree of tolerance and an increased willingness to consider alternative ideas – from opposing religious or scientific views to contrary opinions at the dinner table.
So Burton examines the mental sensation of certainty. He calls it the “feeling of knowing.” And that makes sense:
Everyone is familiar with the most commonly recognized feeling of knowing. When asked a question, you feel strongly that you know an answer that you cannot immediately recall. Psychologists refer to this easily recognizable feeling as a tip-of-the-tongue sensation. The frequent accompanying comment as you scan your mental Rolodex for the forgotten name or phone number is: “I know it but I just can’t think of it.” You are aware of knowing something, without knowing exactly what this sensation refers to. The most profound feeling of knowing is the “aha,” a spontaneous notification from a subterranean portion of our mind, an involuntary all-clear signal that we have grasped the heart of a problem. It isn’t just that we can solve the problem; we “know” that we understand it.
He offers a good example for that, a self-test you should read, but then he gets into what he calls the enormously complicated details of neurobiology. And here he borrows the term “hidden layer” from the artificial intelligence community:
By mimicking the way the brain processes information, A.I. scientists have been able to build artificial neural networks (ANNs) that can play chess and poker, read faces, recognize speech and recommend books at Amazon.com. While standard computer programs work line by line, yes or no, all eventualities programmed in advance, the ANN takes an entirely different approach. The ANN is based upon mathematical programs that are initially devoid of any specific values. The programmers only provide the equations; incoming information determines how connections are formed and how strong each connection will be in relationship to all other connections. There is no predictable solution to a problem – rather, as one connection changes, so do all the others. These shifting interrelationships are the basis for “learning.”
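Burton’s borrowed metaphor can be made concrete. Here is a minimal sketch of such a network – one hidden layer learning the XOR function from data alone. Everything here (the layer sizes, the learning rate, the training targets) is illustrative, not anything from Burton’s book:

```python
import numpy as np

# A toy artificial neural network with one hidden layer, in the
# sense Burton borrows: the programmer supplies only the equations;
# incoming data determines how strong each connection becomes in
# relation to all the others.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden connection strengths
W2 = rng.normal(size=(4, 1))  # hidden -> output connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1)   # the "hidden layer"
    out = sigmoid(hidden @ W2)
    # Backpropagation: as one connection changes, so do all the others.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    W1 -= 0.5 * X.T @ d_hidden

# After training, the mean squared error on XOR has fallen below the
# roughly 0.25 an untrained, guessing network would show.
error = float(np.mean((out - y) ** 2))
```

Note that nothing in the code enumerates the XOR cases line by line, yes or no, all eventualities programmed in advance; the mapping emerges in the shifting interrelationships among the connection weights – which is the sense in which the “learning” lives in the hidden layer.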
That sounds a lot like what the administration is doing with all domestic electronic communication – it’s data-mining, something those of us in systems know quite well. It’s looking for the hidden, even if you don’t know what it is, or, more precisely, because you don’t know what it is:
With an ANN, the hidden layer is conceptually located within the interrelationships between all the incoming information and the mathematical code used to process it. In the human brain, the hidden layer doesn’t exist as a discrete interface or specific anatomic structure; rather, it resides within the connections between all neurons involved in any neural network. A network can be relatively localized or widely distributed throughout the brain. Proust’s taste of a madeleine triggered a memory that involved visual, auditory, olfactory and gustatory cortices – the multisensory cortical representations of a complex memory. With a sufficiently sensitive fMRI scan, we would see all these areas lighting up when Proust contemplated the madeleine.
So this is the way the brain processes information:
It is in the hidden layer that all elements of biology (from genetic predispositions to neurotransmitter variations and fluctuations) and all past experience, whether remembered or long forgotten, affect the processing of incoming information. It is the interface between incoming sensory data and a final perception, the anatomic crossroad where nature and nurture intersect. It is why your red is not my red, your idea of beauty isn’t mine, why eyewitnesses offer differing accounts of an accident or why we don’t all put our money on the same roulette number.
So this powerful feeling of knowing – certainty – is a pretty much unconscious calculation of correctness. As Burton puts it – “The greater the likelihood of correctness, as determined by your unconscious, the stronger the sense of certainty.”
And you thought you were thinking things through. You weren’t.
Burton comments on that bestseller, Blink, which says your gut feelings are “perfectly rational” – just “thinking that moves a little faster and operates a little more mysteriously” than conscious thought. Burton says Malcolm Gladwell has it a bit wrong. It’s not thinking at all – “Gut feelings and intuitions, the Eureka moment and our sense of conviction, represent the conscious experiences of unconsciously derived feelings.”
This is how we learn. It’s all a matter of those unconsciously derived feelings:
When our body needs food, we feel hunger. When we are dehydrated and require water, we feel thirsty. If we have sensory systems to connect us with the outside world, and sensory systems to notify us of our internal bodily needs, it seems reasonable that we would also have a sensory system to tell us what our minds are doing.
To be aware of thinking, we need a sensation that tells us that we are thinking. To reward learning, we need feelings of being on the right track, or of being correct. And there must be similar feelings to reward and encourage the as-yet unproven thoughts – the idle speculations and musings that will become useful new ideas.
It’s all unconscious, really, or so it would seem here. We just think we’re thinking:
To be an effective, powerful reward, the feeling of conviction must feel like a conscious and deliberate conclusion. As a result, the brain has developed a constellation of mental sensations that feel like thoughts but aren’t. These involuntary and uncontrollable feelings are the mind’s sensations; as sensations they are subject to a wide variety of perceptual illusions common to all sensory systems.
If so, then Burton suggests we pay attention to what neuroscience is telling us about the limits of knowing, how what we think isn’t entirely within our control, and use it in our lives:
Perhaps the easiest solution would be to substitute the word “believe” for “know.” A physician faced with an unsubstantiated gut feeling might say, “I believe there’s an effect despite the lack of evidence,” not, “I’m sure there’s an effect.” And yes, scientists would be better served by saying, “I believe that evolution is correct because of the overwhelming evidence.”
That’s humbling, and no politician would ever substitute “believe but not sure” for “certainly know” in any speech – that would end things fast. It would be accurate, but not effective.
Burton would like to see more of it in science:
Substituting believe for know doesn’t negate scientific knowledge; it only shifts a hard-earned fact from being unequivocal to being highly likely. Saying that evolution is extremely likely rather than absolutely certain doesn’t reduce the strength of the argument, and at the same time it serves a more fundamental purpose. Hearing myself saying “I believe” where formerly I would have said “I know” serves as a constant reminder of the limits of knowledge and objectivity. At the same time as I am forced to consider the possibility that contrary opinions might have a grain of truth, I am provided with the perfect rebuttal for those who claim that they “know that they are right.” It is in the leap from 99.99999 percent likely to 100 percent guaranteed that we give up tolerance for conflicting opinions, and provide the basis for the fundamentalist’s claim to pure and certain knowledge.
Well, perhaps so – just don’t expect it in politics.
And as for the Bush theory of international relations and distinguishing between felt knowledge – hunches and gut feelings – and actual knowledge, Burton asks for the impossible – “Any idea that either hasn’t been or isn’t capable of being independently tested should be considered a personal vision.”
Right – we should have thought of that. Burton never mentions Bush, but the implications are clear.
That’s logical. But can we all accept the credo Burton offers? Try this:
Certainty is not biologically possible. We must learn (and teach our children) to tolerate the unpleasantness of uncertainty. Science has given us the language and tools of probabilities. That is enough. We do not need and cannot afford the catastrophes born out of a belief in certainty.
But what will we do with all the suddenly unemployed politicians?
But then there is the other side of the coin, the world of academia, where, if you say things are complex, everyone nods sagely, and if you say things are quite simple actually, everyone rolls their eyes and considers you a bit pathetic.
Russell Jacoby, a professor in residence in the history department out here at UCLA, introduces us to that world – for those of you who haven’t experienced the rarified and often preposterous world of graduate school at a top university. See Not to Complicate Matters, but… – in the Chronicle Review from the Chronicle of Higher Education, of course. That is a strange world. He asks one question about it – “How did the act of complicating become a virtue?”
The refashioning of “complicate” derives from many sources. One recipe calls for adding a half cup of poststructuralism to a pound of multiculturalism. Mix thoroughly. Bake. Season with Freudian, Hegelian, and post-Marxist thought. Serve at room temperature. The invitees will savor the meal and will begin to chat in a new academic tongue. They will prize efforts not only to complicate but also to “problematize,” “contextualize,” “relativize,” “particularize,” and “complexify.” They will denounce anything that appears “binary.” They will see “multiplicities” everywhere. They will add “s” to everything: trope, regime, truth. They will sprinkle their conversations with words like “pluralistic,” “heterogenous,” “elastic,” and “hybridities.” A call for “coherence” will arrest the discussion. Isn’t that “reductionist”?
This is not George Bush’s world. It’s what Burton called for, gone bad:
Cutting-edge scholars offer as the latest news these old saws: that things differ according to place and time; that our world is fractured and complex; that multiple entities constitute society. Consider the effort by the historian William H. Sewell Jr. to “clarify what we mean by culture.” After 20 pages, he triumphantly concludes that culture is “variable, contested, ever-changing, and incomplete.” In case we are deflated by that news, he adds, “I would argue forcefully for the value of the concept of culture in its nonpluralizable sense, while the utility of the term as pluralizable appears to me more open to legitimate question.” If that seems a little obvious, he adds: “Yet I think that the latter concept of culture also gets at something we need to retain: a sense of the particular shapes and consistencies of worlds of meanings in different places and times.”
What? Of course it’s nonsense. And it leads to more nonsense:
The new devotion to complexity gives carte blanche to even the most trivial scholarly enterprise. Any factoid can “complicate” our interpretation. The fashion elevates confusion from a transitional stage into an end goal. We celebrate the fact that everything can be “problematized.” We rejoice in discarding “binary” approaches. We applaud ourselves for recognizing – once again – that everything varies by circumstances. We revel in complexity. To be sure, few claim that the truth is simple or singular, but we have moved far from believing that truth can be set out at all with any caution and clarity. We seem to believe that truth and falsehood is a discredited binary opposite. It varies according to time and place. “It depends,” answer my students to virtually every question I ask.
To defend binary thinking is to invite opprobrium. It is true that fixed oppositions between good and evil or male and female and a host of other contraries cannot be upheld, but this hardly means that binary logic is itself idiotic. Binary logic structures the very computers on which most attacks on binary logic are composed. Some binary distinctions are worth recognizing, if not celebrating: the distinction, let us say, between pregnant and not pregnant, or between life and death. Others are at least worth noticing – for example, that between a red and a green light. You either have $3.75 for a latte or you do not. Can that be “complicated”?
… The cult of complication has led – to alter a phrase of Hegel’s – to a fog in which all cows are gray.
Maybe they are. You never know – they could be. Less certainty, as Burton notes, would be good, and far more honest. But one can take things too far.