Chapter 9: Credibility
9.3 The Paradoxes of Credibility
Credibility is power: imagine being so credible that if you say it, people know it’s true. If you’re a detective, for example, and you identify a murder suspect, the police will put them in handcuffs based on your word alone. Picture being Sherlock Holmes; your reputation is so solid that everyone knows how brilliant you are, trusts you, and gives you free rein to do what you want. Who doesn’t want that kind of power?
Well, for one: Lieutenant Columbo.
The Columbo Effect. Columbo is a fictional television detective played by Peter Falk in 69 episodes of Columbo from the 1970s to the early 2000s. In manner and communication style, he’s the complete opposite of Sherlock Holmes, who constantly tells everyone he’s smarter than they are. Columbo is always scratching his head and asking people to help him figure out things he can’t figure out on his own. In episode after episode, the murder suspect is lulled into complacency by thinking that the detective on the case is a fumbling idiot, usually realizing they’re wrong only at the end of the show, when Columbo pins them down with irrefutable proof. The last thing Columbo wants is for people to think of him as a genius.
Why? He can see the advantages of people letting their guard down, which probably nobody does around Sherlock Holmes. Low expectations bring a lot less pressure than high ones (it must be tough to be Lionel Messi or Simone Biles, hearing people call you the GOAT[1] all day long). And both Sherlock Holmes and Columbo are fictional detectives who always solve the case, but imagine if it were the real world and they failed sometimes. “SHERLOCK HOLMES WAS WRONG” would be a front page headline, but “A person who everybody thought wasn’t very good at their job turned out to be not very good at their job” isn’t a headline at all. Isn’t it much nicer to have people say “Huh, they’re smarter than they look” than to say “I guess I overestimated them”?
That’s why real-life radio hosts Ray and Tom Magliozzi (also known as “Click and Clack, the Tappet Brothers”) were so fond of self-deprecating humor on their show Car Talk, which ran from 1977 to 2012. They were constantly telling everyone how stupid they were and finishing their show with the line “Well, you wasted another perfectly good hour listening to us.”
Do you know anyone who constantly relies on self-deprecating humor? On the surface it sounds like they’re attacking their own credibility, but they’re really building it up. The Magliozzi brothers were actually the most highly paid performers on public radio for a while, and people trusted their expertise — probably more than they would have if the brothers had bragged about being the world’s greatest car experts. You can compare it to a weather forecaster who errs on the side of pessimism: if you predict rain and it does rain, you’re safe; if you predict rain and it turns out to be a beautiful sunny day, people don’t mind so much. Likewise, if you go around telling people you’re wrong and you are wrong, at least you called it accurately, but if you turn out to be right, everyone’s pleasantly surprised. There is a growing body of academic research on “reverse credibility” that explores this topic in more depth.[2]
Of course, if you’re going to use a lot of self-deprecating humor, you have to accompany it with actual results; it’s important that Columbo solved his cases, and that the Magliozzi brothers gave good car advice. If you go around making jokes about how stupid you are and people nod in agreement, you’re in big trouble.
That is just one paradox related to credibility. Others have to do with the relationship between the core dimensions of expertise and trustworthiness. In ideal situations, the two dimensions work together: when I go to a doctor, I want to know that they are knowledgeable and that I can trust them. As noted, sometimes one dimension is lacking, but if both are present, it all works out well, doesn’t it?
Not necessarily. There are reasons why the dimensions clash with each other, and being high on one dimension actually makes it harder to be high on the other one. How can this happen?
Let’s start by looking at the question of how you develop expertise. It usually comes from spending years and years studying a subject. Take someone who knows everything there is to know about dolls: they can recite the details of every doll manufacturer going back two centuries, they know what materials were used to make the skin and hair, they know what kind of dolls were popular at different times and how much they cost. But there’s one question that no one ever stops to ask of such an expert: “Do you like dolls?” Of course they do! Since they devoted so much of their lives to the subject, we can safely assume that they are “passionate” about it.
But there’s a problem with the word “passionate.” It implies strong feelings, strong opinions, and strong motivations, and there’s another word we can use for that: bias. This raises the question: does that doll expert have any biases that get in the way? And where does the word “biased” appear in the two-dimensional chart? At the bottom of the trustworthiness dimension, i.e., “lacking credibility.” I call this the Paradox of Passion: the more knowledgeable you are on a subject, the higher the risk that you will be perceived as biased.
If the doll example seems abstract, consider other real-life examples:
- After the President of the United States gives a State of the Union speech, the tradition is for other politicians to follow it up with commentary speeches. It makes sense for politicians to comment, because they know politics as well as the president does. But is anyone surprised that the commenter who belongs to the same political party as the president always thinks the president did a great job with the speech, and the person who belongs to the opposing party thinks the speech was terrible? The bias is so strong and obvious that many people consider those follow-up speeches a waste of breath. But how can you find an expert on politics who isn’t biased?
- In 2008–2009, the United States faced an economic crisis that was caused in large part by the financial industry. It was clear that the government needed to step in to avert an even worse disaster, and that required them to appoint specialists to “clean up the problem.” But who could they appoint? In the end, they appointed a lot of the same people who caused the problem in the first place, because they were the only ones who understood the complex financial systems that needed fixing. There weren’t enough people who had the necessary expertise but who weren’t directly involved.
- Earlier I mentioned the credibility of informants, who are often people with trustworthiness problems. As one example, in the 1990s in Italy there was a series of trials against the Sicilian mafia, and they rested heavily on key witnesses known as “pentiti” who described what goes on in the mafia. The problem? They were all former mafia members themselves, so while their expertise wasn’t questioned, their motives were. How can you trust a “rat”? But who else could testify?
- In the American justice system, a spouse cannot be compelled to testify against their partner, even though they know the partner, their character, and their goings-on better than anyone. Let’s say a woman is accused of murder and her spouse could provide the alibi: “She spent the night at home with me.” Even if the spouse did testify, would anyone believe them? They have crucial knowledge, but they’re married to the defendant, which means they have a motive to either protect them or get rid of them. The same applies to civil trials. A son suffers brain damage in a collision and needs witnesses to testify about his cognitive abilities before the injury. The mother knows her son better than anyone, but is not a credible witness because of the family relationship.
What is the solution to the Paradox of Passion? How do you find someone with knowledge but without bias? The solution that many people yearn for is someone who is neutral — an expert with no “skin in the game.” Yet neutrality is a difficult thing to pull off: no matter how careful and even-handed a person is, someone is going to accuse them of bias. Fact-checking websites like FactCheck.org or PolitiFact devote themselves to evaluating, without political bias, the truth of statements made by public figures, yet the sites are constantly accused of having a slant one way or the other. Judges, mediators and arbitrators, news reporters, auditors, inspectors: there are many people whose jobs require them to be impartial and neutral, and society depends on them. But neutrality is an ideal to strive for, and even these people themselves recognize that they can never perfectly achieve it.
The clash between credibility dimensions also presents itself in another way, which I call the Similarity vs. Superiority Paradox. It’s really about comfort and trust, and the differences between people in where those feelings come from. For some people, the feeling of trust comes from knowing that someone else is better than they are at something, usually because of advanced education or training. When I go to the doctor, I’m glad they studied medicine for years and know a lot more about medicine than I ever will. If they tell me something that goes against my preconceived notions, my tendency is to follow what they say because their knowledge is superior to mine. I suspect that this is a universal impulse; to hold the opposite attitude, I would have to believe that I know everything about everything, which takes an awful lot of confidence.
Yet there is a counterbalancing force, and I suspect that it’s equally universal: the tendency to trust people who are like you. If you look like me, talk like me, and share my attitudes, of course I’m inclined to feel an affinity with you. People have recognized the downsides of this: it’s a form of xenophobia that leads to racism, prejudice, and faulty assumptions about who you can trust and who you can’t (see the discussion of the border crossing scene from No Country For Old Men in Chapter 6). For that reason, educators acknowledge that it takes effort to achieve the ideal of equity, diversity, and inclusion — but also that trusting others who are like us is the default starting point.
Which impulse is stronger? If comfort from the knowledge that others are smarter than you is the stronger force, you’ll be inclined to trust experts. If trust in others who are like you is the stronger force, you might come to mistrust experts because they aren’t “like us”: they’re snooty, egg-headed, ivory tower elites you can’t relate to.
One curious survey question that pollsters asked during the 2000 U.S. election season, when Al Gore ran against George W. Bush, was dubbed “the beer question”: “Who would you rather have a beer with?”[3] The assumption was that voters wanted a president they could imagine hanging out with — that voting was based partly on “relatability.” The fact that Gore didn’t score very well on that question, and that the next Democratic nominee, John Kerry, scored even lower, may help explain why Bush was president for eight years. Or at least, it explains why presidential candidates ever since seem to love having their picture taken without a tie, eating fast food; you don’t get elected sending the message “I’m far superior to you.” Megan Garber, however, examines the other end of the scale: “The beer question, after all, is the wrong question to ask. Do we really want a leader who is on our level — or is it better, actually, to have a leader who is demonstrably above us? My money’s on the latter.”
This paradox also shows up in other situations, such as the classroom. Many pedagogical sources tout the value of discussion classes, and argue that they are much better than the old model of a “sage on the stage” professor just delivering a lecture. (These sources might also discourage teachers from using words like “pedagogical” and “tout” when talking to students, since it sounds like showing off their fancy vocabulary.) Not often acknowledged, however, is that discussion-based classes deny students the opportunity to hear what the teacher — presumably the most informed person in the room — has to say. A colleague of mine was looking forward to taking a seminar from a well-known professor, but was disappointed with the result: “The professor barely said a word, and instead I spent most class time just listening to my classmates spread their ignorance around — what a waste!”
Also consider the question of movie reviews: which opinion would you rather hear — that of a professional critic with a degree in film studies, or a review from an ordinary movie viewer like you? In some cases there is a wide gap between the opinions of the two camps, so which one matters more to you? I put more weight on the critics, perhaps because I’m an academic myself, but my children just read customer reviews instead. This seems to coincide with a general trend toward preferring amateurs over professionals in fields such as news (where you might prefer getting your news from a blogger instead of a large news organization), hospitality (where you’d rather stay at the home of a stranger renting their place out than at a hotel), and wedding officiating (get your friends to do it rather than hiring a minister or justice of the peace). There are obviously many factors involved in such trends, especially cost, but one of them is the appeal of skipping all the strings that come along with professional affiliation and finding someone who is just doing it from the heart.
1. If you’re not familiar with it, that acronym stands for “Greatest Of All Time.”
2. Examples of academic articles on reverse credibility include: Bohner, G., Ruder, M., & Erb, H.-P. (2002). When expertise backfires: Contrast and assimilation effects in persuasion. British Journal of Social Psychology, 41, 495–519; Ballman, T., & Kanady, K. (2006). Reverse Credibility. Jawbone Publishing; Tormala, Z. L., Briñol, P., & Petty, R. E. (2006). When credibility attacks: The reverse impact of source credibility on persuasion. Journal of Experimental Social Psychology.
3. Seifert, E. J. (2014). The Politics of Authenticity in Presidential Campaigns, 1976–2008. McFarland. ISBN 9780786491094.