The Truths We Have In Common

April 4, 2018
John Michael Greer

In recent posts here on Ecosophia.net, I've sketched out the way that the era of abstraction in which we've all grown up has foundered, following patterns that were old before our civilization was born. We've talked about the way that the abstract generalizations that started out helping to make sense of on-the-ground realities have been put to use instead to spin, distort, and conceal them; we've talked about the way that so many people in today's America have reacted to the fog of abstraction by trying to claim unearned authority for themselves or the beliefs they favor. Now it's time to start talking about what can be done about it all.

Appropriately enough, that'll be easier to do if we start from a specific example rather than staying safely out in the cloudy realm of abstract generalizations, and specific examples are easy enough to find just now. There recently appeared, for instance, a newspaper article claiming, on the basis of an assortment of scientific studies, that monosodium glutamate (MSG), a widely used food additive, doesn't actually cause the headaches and other unwelcome symptoms that many Americans say it does in fact cause.

Right here we have the intellectual crisis of our time in microcosm. Here are ordinary people, who have experienced something. Here are the experts, who insist that their experience doesn't exist, or at least doesn't matter, because a supposedly authoritative study by qualified experts says that the thing they experienced is purely anecdotal. Those of my readers who have been paying attention to the vexed relationship between expert opinion and popular culture in today's United States will find this interaction all too familiar. Familiar, too, is the plaintive cry of the experts: why won't people believe us when we speak the truth?

What's being missed here, to begin with, is that reality is always anecdotal. I happen to know people who reliably get headaches and other unpleasant symptoms when they eat food containing MSG-and yes, it happens whether or not they know there's MSG in what they're eating, so it can't be dismissed as a product of the nocebo effect.* A great many Americans know such people; a significant fraction are such people. That awkward anecdotal reality displays the first level of the crisis of our time-if you yourself experience something, or know someone you trust who experiences something, hearing from experts who insist it doesn't exist and doesn't matter is unlikely to have any effect on your opinion about that thing-and it's very likely to have an effect, a negative one, on your opinion about experts.

(*The nocebo effect? That's the placebo effect's nasty sister, which can cause people to suffer physiological harm from a harmless substance if they believe that it's toxic. Inevitably, a certain number of experts insist that it and its kindly sister don't exist.)

Let's go deeper, though. On what basis did the experts just mentioned claim that MSG doesn't cause the headaches and other nasty symptoms that many people experience when they eat it? On the basis of "numerous high-quality studies." That sounds very impressive, until you take a hard look at the realities behind that label.
These days there's a lot of nervous talk in scientific circles about the "replication crisis"-the fact, demonstrated over and over again in recent years, that when you repeat important experiments in a wide range of sciences, you won't get the same results as the studies that first reported on those experiments. The reason for that crisis is as simple as it is unpalatable. Ever since science stopped being an enticing hobby in which amateurs such as Charles Darwin and Gregor Mendel were welcome, and came to the attention of big government, big business, and big money generally, statistical gamesmanship, data manipulation, and outright scientific fraud have become common in many sciences and standard practice in some. (Psychology is particularly problematic in this regard-I personally witnessed repeated examples of blatant scientific fraud in psychological research in my two stints at university, and other people I know who worked as research assistants in psychology have told me stories very similar to mine-but it's far from the only one involved.)

So far, so dismal-but let's take another step deeper. Why would scientists engage in the dubious practices that brought the replication crisis into being? By and large, it's a simple matter of money. On the one hand, big business long ago came to see scientific studies as simply one more form of marketing, and so ample funding is readily available to any research team that can take the hint and turn out studies that further the interests of major corporate sponsors. On the other, in the bitter tooth-and-nail struggle for grant money that dominates the lives of academic scientists today, colorful, interesting results that pander to the prejudices of influential peer reviewers are an essential weapon, and if the data don't happen to provide such things-and very often they don't-the temptation to gimmick them until they do so is immense, and not always resisted.

What makes this a matter of bitter irony is that the scientific method came into being precisely in an attempt to forestall such problems. That's why it's mandatory for a scientific study to give enough details of the experimental procedure that anyone who doubts the results can run the same experiment again for themselves, and it's also why scientific papers are normally written in some of the dullest prose ever inflicted on readers in any language-the point of this latter requirement is to keep researchers from using lively language to distract attention from faulty logic or incomplete data. In those halcyon days before science became a profit-centered affair largely sponsored by big business, these and other measures built into the scientific method did a fair job, though not a flawless one, of keeping fraud at bay. Now? They're about as useful as any other seventeenth-century security measure would be in the modern world.

The collapse of trust that's rapidly eroding the ability of scientific experts to tell other people what to think, then, has deep and far-reaching roots. It's not, however, a new thing. The twilight of every age of abstraction witnesses, among other things, the collapse of trust in whatever a society's qualified intellectuals have been, and by and large that collapse of trust is earned in the same way as the present example. It's set in motion by the widening gap between the abstract generalizations that the qualified intellectuals present as reality, and the anecdotal realities that everyone else has to live with.
As often as not, too, it's blatant intellectual fraud of various kinds that finishes off the age of abstraction, as people discover yet again that a sufficiently abstract truth is indistinguishable in practice from a barefaced lie. Western civilization ran headlong into such a crisis in its preindustrial days, as the richly abstract intellectual culture of the high Middle Ages slammed face first into a brick wall of hard realities it had serenely dismissed from consideration. The Renaissance, with its refocusing of intellectual activity away from abstract philosophical and theological speculation and toward the "more human studies" of literature, history, philology, and law, emerged out of that collision, as people across the Western world took a hard second look at the Middle Ages' casual dismissal of the achievements of the ancient world, and more generally turned away from reliance on an abstract consensus dictated by experts to embrace ways of knowing the world founded more directly on individual lived experience.

One of the tools they used for this purpose, in turn, came out of the ancient Greek encounter with the same historical transition. The words used for that tool, in Greek and English, have been thoroughly whipsawed by changes in intellectual fashion, and so it's going to be necessary to do a little unpacking before we proceed.

In Greek and Roman literary culture, and then again in the Renaissance, a curious practice known as the Art of Memory was a standard part of ordinary education. It's important enough that it deserves a post of its own, but the short version for now is that it involved learning to imagine places-real or imaginary-and stock those places with colorful images that encoded information to be remembered.

Take a moment to imagine, as vividly as you can, your kitchen. Now imagine that in each of the four corners of that room there's an image of one of your favorite movie stars, dressed (or undressed) in a memorable way, and each movie star is holding an item that you need to get from the grocery store on your next visit. Spend several minutes making each image as detailed, vivid, and three-dimensional as possible. That's a very, very dim and simplistic version of what the inside of your head would look like if you put half an hour a day into the Art of Memory, the way most educated people did in the Renaissance.

It's a curious conceit of modern thought that most people, when they encounter the Art of Memory, assume as a matter of course that it can't actually work. We've been taught systematically to ignore, suppress, and dismiss our own innate capacities, so we have to go pay to use a machine instead. As a result, the thought of developing those capacities by training and practice seems absurd to most of us, if not vaguely obscene. Nonetheless, it does work, and given a modest amount of practice it works very well indeed. (Call the image of your kitchen back to mind, with the movie stars in their places. Are the grocery items still there?)

We'll get to other applications, and implications, of the Art of Memory in future posts. For now, the point that matters is that people trained in that Art tend to think of knowledge as something that sorts itself out into imagined "places." An entire field of study emerged in ancient Greek times to explore the question of what pieces of knowledge a well-stocked memory ought to have in it.
The Greek word for "places" is topoi, topoi, and so the field of study was called topike, topike, or as we now say, topics (just as politike, the study of communities, became "politics"). The last feeble remnant of the word's old meaning appears when we speak of the topic of an essay or a speech, meaning the general idea or subject the essay or speech is about. To a rhetor in late classical Greece or Rome, or to a Renaissance humanist, the topic of an essay or a speech wasn't anything so vague. It was the place-literally, the imagined place in a trained memory; figuratively, the launching point for the argument-from which the essay or speech started. That starting place wasn't an abstract generalization affirmed by a consensus of experts; remember, in both the cases we're discussing, an age of abstraction had crumpled under the weight of its own failures, and the old study of topics was in part developed as a quest for some alternative. Where do you begin a discussion when there's no consensus of the experts you can rely on, no set of abstract generalizations from which answers can be found by deduction? That's the challenge that the study of topics was meant to address. The answer that became standard, both in classical times and in the Renaissance, was as straightforward in concept as it is sweeping in its implications: you start from the ideas about truth that you have in common with your audience, whatever those happen to be. In any community, however deeply riven by political, religious, or cultural strife, there are certain things that everybody accepts as generally true. The furious rhetoric of a waning age of abstraction tends to dismiss those common truths as irrelevant, or even denies their existence. (For a perfect example of how this works in practice, listen to the way that people on both ends of the political spectrum in today's American spit out hateful caricatures of their opponents' beliefs, values, and goals.) It's one of the first tasks of a rising age of reflection to identify the truths we have in common, the anecdotal experiences that most people accept as real most of the time, and use those to establish a common ground where people of good will can meet and discuss the issues that matter to them. Now of course the first thought of most people at the twilight of age of abstraction, encountering this last concept, is to go looking for an authoritative list of abstract generalizations that everyone is supposed to agree with, and try to use those as the basis for discussion. Since no such list of commonly agreed truths exists-it's exactly the absence of agreement on basic generalizations that makes the partisan conflicts at the end of an age of abstraction so bitter-you end up right back in the same mess you were in, with people beating each other over the head with dogmatic claims about this or that abstract truth, and generally going on from there to continue the beating with less vaporous instruments. The second thought of most people at the twilight of an age of abstraction is to find some more or less concrete anecdote that serves as a stalking horse for a preferred abstraction, and demand that other people acknowledge the reality and relevance of the anecdote as a way to try to force acceptance of the abstraction. Sometimes this strays very far into the territory of the absurd. 
In my post here two weeks ago, for example, I noted that ever since Hillary Clinton lost the 2016 election, she and a good many of her supporters have been trying to claim that in some abstract sense, she actually won it. On cue, I had one of her supporters pop up to insist repeatedly at steadily rising volume that I had to acknowledge in public, or at least in a private email, that she got a larger share of the popular vote than Donald Trump did. Since we don't decide elections here in the United States by who gets the largest share of the popular vote, this was exactly the kind of irrelevant abstraction I'd critiqued in the post, but that fact was clearly lost on my irate commenter.

Demands of that kind, though, are all but universal in today's collective nonconversations. By and large, people on each side of any given controversy insist, right up front or via an assortment of passive-aggressive maneuvers, that in order to discuss the controversy with them, you have to accept whatever abstract generalization they're using to define it, or at least kowtow to whatever anecdotal claim they're using just then as a battle flag. That, in turn, lands you right back in the same predicament we've been discussing for the last couple of months, shrieking insults at people who don't accept your abstract generalizations, because you've forgotten that there's any other way to respond to a disagreement about issues that matter.

(I mention these two bad habits of noncommunication because it's been my experience that when I try to make the point that's central to this week's post, a great many people immediately try to drag the conversation back to their preferred set of abstract generalizations by means of one of the two gimmicks just described. May I offer a helpful hint? If you try to do that in the comments to this post, dear reader, your comment will be deleted without mercy-unless, that is, I decide to put it through, pick it apart to show how it demonstrates the dysfunctional habits I've just critiqued, and then delete all of your attempts to respond to me. You have been warned.)

The truths we have in common, as I was saying, are not abstractions but anecdotes-personal experiences most of us have had, or know people who've had. With this in mind, let's cycle back to the newspaper article mentioned early on in this week's post, the one claiming that MSG doesn't cause the headaches and other nasty symptoms that, for many people, it does in fact trigger. What's the anecdotal reality here? The fact that Aunt Mildred, let's say, gets a headache any time she eats food containing MSG. That's an anecdote, sure, but it's as real as anything can be for your Aunt Mildred, and for the members of her family who end up with Aunt Mildred huddled on the couch in pain because somebody forgot to read the label on the snack crackers they served up at a family get-together.

There are plenty of other anecdotal realities, to be sure, and many of them are different from Aunt Mildred's. It may well be, for example, that for every reader who has an Aunt Mildred who gets a headache from MSG, there are three other readers whose entire families can wolf down MSG-laden foods by the plateful without turning a hair. That anecdotal reality can perfectly well coexist with poor Aunt Mildred's, even though the abstract generalizations "MSG causes headaches" and "MSG doesn't cause headaches" contradict each other. If we stay stuck on the abstractions, that contradiction can't be avoided.
If we skip the abstractions and stay in the realm of anecdotal reality, on the other hand, there's room for many differing experiences-in the words of the Zapatista rebels of southern Mexico, we can have a world in which many worlds fit. There's a truth we can have in common here, which is that some people have food sensitivities and some don't. If we happen to be planning a party to which Aunt Mildred and the MSG-ovores are all invited, furthermore, we can keep that shared truth in mind, balance the anecdotal realities against one another, and make sure (for example) that someone reads the labels and correctly identifies which foods Aunt Mildred can eat without ill effects and which she needs to leave alone.

I've deliberately chosen a simple example here, but the same logic can be extended to things that are far from simple. Two weeks from now, we'll go further in the same direction, and talk about another concept that's been wrenched out of its original context and stripped of most of its meanings: the concept of the commonplace.