Image: “Good Faith” by Nick Youngson, CC BY-SA 3.0, Alpha Stock Images
I’ve been wanting to explore this topic for a few years now, and I need to start somewhere, but I don’t have the bandwidth just now to do more than juxtapose a few things.
The overarching problem sometimes goes by the name of media literacy, or news literacy, or (alas) “critical thinking,” the last a phrase I have always struggled with and now feel little more than exasperation over. What’s wanted, I think, is for people
- to be able to reason well,
- to ask pertinent questions,
- to be skeptical when that’s warranted and trusting when that’s warranted (because communities are ultimately based on elements of trust),
- to be self-aware enough to practice a certain kind of mindful self-correction at all times, if only at a low level.
The list is not exhaustive. (I warned you about my bandwidth.)
It’s worth noting that every single one of the items in the non-exhaustive list above is the site of not inconsiderable controversy. “Reasoning well” has been problematized both on the grounds of what counts as “reasoning” and on the criteria for evaluating “well.” What makes a question “pertinent,” and who’s to say? (“Who’s to say?” is a frequently combative way of raising the problem of evaluation.) When is skepticism warranted and when is trust warranted? (See Othello, to cite one example, for a fascinating case study.) And by what magic properties are we able to transcend our own biases (or cultural contexts, or whatever one terms the determinisms) in order to be “self-correcting”?
I myself do not believe it’s turtles all the way down, but I’ve met people who’ve argued the opposite and seem unaware that turtles all the way down is a conversation-stopper, not a reasoned argument about axioms in reason.
And media literacy? Some essential reading here from danah boyd.
It’s one thing to talk about interrogating assumptions when a person can keep emotional distance from the object of study. It’s an entirely different thing to talk about these issues when the very act of asking questions is what’s being weaponized. This isn’t historical propaganda distributed through mass media. Or an exercise in understanding state power. This is about making sense of an information landscape where the very tools that people use to make sense of the world around them have been strategically perverted by other people who believe themselves to be resisting the same powerful actors that we normally seek to critique.
True story: I once argued with a colleague for an hour about the question of truth. He was a sort of everything-is-provisional, who’s-to-say, truth-claims-are-dangerous kind of arguer, so I wasn’t making any headway until it occurred to me to ask if he thought one could tell lies about the Holocaust. He thought a moment, and said yes, of course. To which I replied, then by extension it must be possible to tell truths about the Holocaust.
The point here is that today we seem to be able to problematize everything, often in a vigorous effort to stay on the run from dogmatic thinking (usually, but not always, a good thing), and thus the very idea of combating disinformation, let alone misinformation, becomes itself weirdly weaponized, as I saw over and over again with a small set of former high-school classmates during the prior administration.
So step-by-step methods such as SIFT (Stop, Investigate the source, Find better coverage, Trace claims to the original context) seem to me to presuppose widely shared standards of reasoned approaches to informing oneself about the world, and I’m not sure that presupposition has ever been valid. I’m certain it’s not valid now. And I’m certain that some set of assumed truths must precede the operation of reason, though that idea is hardly original with me. (See danah boyd, above.)
At the same time, part of the operation of reason is to demonstrate the limits of the operation of reason, including but not limited to the lack of conclusive evidence. This, I take it, is a sign of humility and evidence of good faith, as Jon Udell writes:
Here’s evidence that acknowledgement of uncertainty really is a powerful signal of credibility. Maybe machines will be able to detect it and label it; maybe those labels will matter to people. Meanwhile, it’s something people can detect and do care about. Teaching students to value sources that acknowledge uncertainty, and discount ones that don’t, ought to be part of any strategy to improve news literacy.
Part of this idea is the adage that “if it seems too good to be true, it probably is,” where too-good-to-be-true means “rest your weary head, because there’s one answer and you have it.” (But what an interesting saying, “too good to be true,” as if anything more than a little bit of okay is likely hollow at the core.)
Another part, though, lines up nicely with the Walker Percy quotation I included in a post several days ago:
The technician and the sophomore who loves his textbooks are always offended by the genuine research man because the latter is usually a little vague and always humble before the thing; he doesn’t have much use for the equipment or the jargon. Whereas the technician is never vague and never humble before the thing; he holds the thing disposed of by the principle, the formula, the textbook outline; and he thinks a great deal of equipment and jargon.
(My earlier note about Percy’s androcentric language applies here as well.)
It doesn’t take long, though, to see that acknowledging uncertainty can lead to cascades of turtles, especially where there are highly specific outcomes that are easily tracked and, in many cases, readily verified (did they live or die? did I pass or fail? did it happen or not?). Too much acknowledgement of uncertainty begins to feel like evasion, as our epidemiologists have found to their sorrow (and ours).
Then come the virologists, who seem to make all of the above completely beside the point as they engage, like Milton’s Belial, with “words clothed in reason’s garb.” This 2015 New Yorker article is essential and difficult reading:
On this occasion, I was the only person listening to his speech, but he spoke in a distant and deliberate tone, using studied pauses and facial expressions, as if I were a video camera’s lens. When he got to the part about virality being a superpower—“I realized that if you could make ideas go viral, you could tip elections, start movements, revolutionize industries”—I asked whether that was really true.
“Can you rephrase your question in a more concrete way?” he said.
I mentioned “Kony 2012,” a thirty-minute film about the Ugandan militia leader Joseph Kony. It has been viewed on YouTube more than a hundred million times, but it did not achieve its ultimate goal: Kony remains at large, as does his militia, the Lord’s Resistance Army.
“To be honest, I didn’t follow too closely after the whole thing died down,” Spartz said. “Even though I’m one of the most avid readers I know, I don’t usually read straight news. It’s conveyed in a very boring way, and you tend to see the same patterns repeated again and again.”
He went on, “If I were running a more hard-news-oriented media company and I wanted to inform people about Uganda, first, I would look it up and find out exactly what’s going on there. Then I would find a few really poignant images or story lines, ones that create a lot of resonant emotion, and I would make those into a short video—under three minutes—with clear, simple words and statistics. Short, declarative sentences. And at the end I’d give people something they can do, something to feel hopeful about.”
This apparently admirable rhetorical advice, the kind of thing one might encounter in a freshman composition class, turns out to be in the service of clickbait, and the narrative of “The Virologist” finds its climax in the infamous New York Times report on how to stay afloat (successful, impactful, important, profitable, relevant, relatable, etc.) in the digital age. You may recall that the report caused a bit of a stir at the time.
In March, a working group at the Times presented an internal report to the paper’s top editors. A few weeks later, the report was leaked, and BuzzFeed published it. The first sentence was “The New York Times is winning at journalism.” However, it warned, “we are falling behind in a second critical area: the art and science of getting our journalism to readers.” Virality, in other words. The report’s authors argued that sharing and promotion should not be seen as a “chore”; on the contrary, “watching a year-old story go viral on social” could be “truly exciting.”
Old-media loyalists were troubled by some of the report’s recommendations. The metaphorical “wall” separating editorial staff and business staff, long considered an axiom of journalistic ethics, was cautiously called into question. Yet traditionalists might not have recognized how good they had it. The report repeatedly distinguished the Times’ core mission—“winning at journalism”—from more easily quantifiable goals, such as winning at page views. In our data-obsessed moment, it is subversive to assert that the value of a product is not reducible to its salability.
When I e-mailed Spartz to ask about the report, he said that he hadn’t heard of it. After skimming it, he wrote that it seemed like too little too late: “Nothing struck me as being particularly eye-opening, just confirmed my suspicions about how far they are behind the . . . Times. (Sorry.)”
The report acknowledged a “tension between quality control and expanded digital capabilities.” Spartz experiences no such tension, because he does not distinguish between quality and virality. He uses “effective,” “successful,” and “good” interchangeably. At one point, he told me, “The way we view the world, the ultimate barometer of quality is: if it gets shared, it’s quality. If someone wants to toil in obscurity, if that makes them happy, that’s fine. Not everybody has to change the world.”
Spartz does not call what he makes journalism, even if he employs a few journalists, and he does not erect barriers between his product and his means of promoting it. Asked to name the most beautiful prose he had read, he said, “A beautiful book? I don’t even know what that means. Impactful, sure.”
Now you can see that I was unfair when I compared the virologists to Belial. Their words are not clothed in reason’s garb. Their words are reasonable. They simply start from a vastly different set of assumptions about what constitutes human flourishing, what is the common good, and how then must we live.
One last item in the series is Wikipedia’s dicta that editors should “assume good faith” and strive to write from a “neutral point of view.”
Assuming good faith (AGF) is a fundamental principle on Wikipedia. It is the assumption that editors’ edits and comments are made in good faith. Most people try to help the project, not hurt it. If this were untrue, a project like Wikipedia would be doomed from the beginning. This guideline does not require that editors continue to assume good faith in the presence of obvious evidence to the contrary (e.g. vandalism). Assuming good faith does not prohibit discussion and criticism. Rather, editors should not attribute the actions being criticized to malice unless there is specific evidence of such.
All encyclopedic content on Wikipedia must be written from a neutral point of view (NPOV), which means representing fairly, proportionately, and, as far as possible, without editorial bias, all the significant views that have been published by reliable sources on a topic….
NPOV is a fundamental principle of Wikipedia and of other Wikimedia projects. It is also one of Wikipedia’s three core content policies; the other two are “Verifiability” and “No original research”. These policies jointly determine the type and quality of material that is acceptable in Wikipedia articles, and, because they work in harmony, they should not be interpreted in isolation from one another. Editors are strongly encouraged to familiarize themselves with all three.
This policy is non-negotiable, and the principles upon which it is based cannot be superseded by other policies or guidelines, nor by editor consensus.
Both of those principles have been widely contested, even mocked (I got into a brief but intense Twitter back-and-forth over the NPOV principle at the 2019 WikiNorthAmerica conference), and both are, I think, essentially related to the problems and problematizings I’ve sketched out above. I also believe that both principles are vital to the success of any human endeavor, and that they have never been more bewilderingly elusive, in a hall-of-mirrors way, than they are now.
Lest you think I have become completely untethered, or am perhaps bouncing on every carapace on a joyride down that lovely series of turtles, I should say that I encounter and wrestle with each of the items above on a daily basis in my teaching, my research, and my writing. I understand that may be more the case for folks who work in the humanities, as I do, than in the physical sciences. But that’s not an answer to any of these questions.
I have some additional thoughts, but that’s all the bandwidth I have for now. All I can say in my own defense is that I go through these items, these questions, because I am trying to keep my thinking straight, or as straight as possible. If I end up sneaking in a version of the very assumption I am apparently inquiring about, I would like to be able to detect it, admit it, and ameliorate the situation enough to take another step or two. I am not confident I can do that, but I am confident that I must try, and I am confident that I must believe it is possible to do so, if the effort is to be anything other than absurd.
I also find it all kind of fascinating, and every now and then I’m in a conversation that seems to me, for at least a while, to be built out of good faith and a scrupulous attention to the conditions and progress of the argument. But then someone says “and you know, higher education is a business,” and I drown.
Are we having fun yet?