Ruben Mersch – Why Everyone is Always Right 




Ruled by the gut

A brief history of gut instincts

Moral tribes

Own reality first

Burying the hatchet – a manual

In conclusion


Further reading





‘The first principle is that you must not fool yourself – and you are the easiest person to fool.’ – Richard P. Feynman

One of the most memorable arguments between me and my wife was about basil. More specifically the question of whether or not my wife had asked me to save half the basil plant for her tomato soup. I’d stirred the whole thing, down to the last leaf, into my salsa verde, and I was certain she’d voiced no objection. Unfortunately she was equally sure of her ground. The quarrel escalated. Soon our disagreement was not merely about who had said what concerning that particular kitchen herb but about our personalities (‘you never listen’ versus ‘you always get worked up about trivialities’) and, ultimately, the viability of our relationship.
It’s a pattern not confined to marital disputes. Do we need more Europe or less? Is Islam peace-loving or the source of all evil? Are genetically manipulated organisms safe and useful or do they cause cancer and drive poor farmers to suicide? Is the earth getting hotter as a result of human activities or is that a myth dreamed up by dippy green tree-huggers? These are also arguments in which two camps quickly emerge, both firmly convinced that the facts manifestly point in one direction only: that which shows them to be indubitably correct. And these disputes too have an unfortunate tendency to escalate rapidly. Soon mud is being thrown with gusto and before you know it someone comes up with the brilliant idea of comparing the opposing side to a bunch of brownshirts from the 1930s.
When others refuse to believe what you believe there are three possibilities. Perhaps your opponents are poorly informed; if only they had all the relevant information they’d inevitably see that you’re right. Or perhaps they do have the necessary knowledge but are sadly too stupid to understand it. Or, the final possibility, they’re intelligent enough and have access to everything they need to know, only they are of evil intent. They’re bad people and they deliberately twist the facts to prove themselves right.
When I started writing this book I backed the first of the three. If everyone knew what I knew, they’d see that my conclusions were correct. So I wanted to provide a nuanced and fact-based description of what we know and what we have yet to find out. I’d sort the wheat from the chaff and enable people to see the wood as well as the trees. I’d show how, even if the path is often slippery and full of obstacles, you can find the truth at the end of it. As soon as I’d completed that task everyone would see how things stand. Once correctly informed, we’d be able to lay aside our differences of opinion and live happily ever after. As for those who even then didn’t see I was right, they were obviously too stupid, or exhibiting ill will.
I didn’t write that book. I started to. Bits and pieces of the things I thought about and wrote down eventually ended up somewhere in the book to which you are now reading the introduction, but most of the material that flowed from my pen at that time is gathering dust in a distant corner of my laptop. Writing is all about what to leave out.
I’m a prophet with a pen. My aim in writing a book is to convince people. And suddenly I’d started questioning whether that aim was achievable.
Understand climate change. That was my first goal when I started my new book. So I read about radiative forcing, thermohaline circulation and the carbon cycle. I wanted to weigh up all the scientific evidence in order to convince my readers that we humans really are responsible for the warming of the earth. But I have a restless mind, so in no time at all I was reading not only about global warming but about an eighteenth-century preacher who wrote an article on billiards, about the courtship ruses of the black grouse and about duck penises (sadly I was unable to use the latter in my book, but just look up ‘duck penis’ on YouTube and marvel). During one of those diversions I came upon some research by an American law professor called Dan Kahan.
When I started writing this book I believed that if people only understood the science behind climate change, for example, they’d see that we really do have a problem. Knowledge brings insight. That was exactly the hypothesis that Dan Kahan wanted to test. Does greater knowledge of science lead to a higher estimate of the risk of a warming planet? Is it because they lack understanding of the science of climate change that people are failing to take measures to prevent it? To answer those questions Kahan gave participants in an experiment a standard test to measure their scientific knowledge. He then asked to what degree they were worried about the warming of the earth. The result was surprising. Knowledge made little difference. People with more scientific baggage were slightly more concerned than those without, but the difference was pretty minimal. Strange. It got even stranger when Kahan grouped his test participants according to their norms and values. Egalitarian communitarians (roughly speaking those who believe that the government should be able to set limits to the free market, in other words American Democrats) became more concerned according to how much they knew about the science behind global warming. But hierarchical individualists (American Republicans with an unshakeable faith in the free market) demonstrated the opposite effect: the more knowledge they had, the less worried they were.
If a discrepancy in factual knowledge lies at the root of our arguments about global warming, GMOs or pesticides, then our views should converge as we acquire more knowledge. If we can just get to know enough, all of us will eventually understand what’s going on. Kahan discovered precisely the opposite. More knowledge doesn’t lead to convergence but to polarisation. And the source of our quarrels is not a difference in knowledge but a difference in morality.
So I was back where I’d started. My book would not achieve its aim. People who saw things the way I did would nod in agreement. People who had a different point of view would not be convinced, in fact they’d only become more convinced they were right. Instead of oil on troubled waters I’d be pouring oil on the fire.
I also started to doubt myself. Was I so very different from the participants in Kahan’s experiment? Like more or less everyone walking this earth, I believe that my opinions are based on solid foundations. But are they? Don’t those foundations in fact consist of my own norms and values, and isn’t my knowledge purely used to defend them?
‘But don’t you know about this study?’ ‘Let’s have a look. Well, clearly you’re right. I’ll have to change my view.’ ‘Wait a moment, I believe I can see a hole in your argument.’ ‘Let me think. Yes, I am indeed wrong.’
That’s how our debates would go if they concerned facts alone. At most there would be a difference of opinion about norms and values once all the facts had been clarified. You might find the wellbeing of future generations more important, whereas I give priority to the wellbeing of today’s generations. I might find a risk of one in a million too small to worry about whereas you don’t. With a bit of luck we’d find a compromise on those matters too, or agree to disagree. Dispute resolved. No hard feelings. If you ever read a newspaper or hang out on an internet forum you’ll know that debates along such lines are sadly as rare as overweight Playboy models.
Is the agro industry slowly poisoning us all, or are pesticides safe and thoroughly tested? Are refugees a blessing for our economic prosperity or the beginning of the end of our civilization? Are the Palestinians the goodies and the Israelis the baddies or the other way around? Should troublemakers keep their paws off old Dutch traditions or is the figure of Black Pete an instance of undiluted racism? Why does everyone think they’re right about these and many other issues and the other side is wrong? Answering that question became my new goal. Guided by Kahan’s insight, I started by looking at morality.
In a dim and distant past I studied philosophy. One important element of the course was ethics. You can base your ethics on a supernatural recipe book. For example, the Bible says you mustn’t covet your neighbour’s wife or wear clothes made from two different fabrics. As students we didn’t read the Bible, but we did read Kant, Bentham, Rawls and others, all of whom made a profession out of thinking about morality. From Kant, for example, you learn that you must act as if the maxim at the root of your action can be elevated to the status of a general law. According to Bentham you must act in such a way as to maximize the total amount of happiness in the world and minimize the total amount of pain. Unfortunately I’d forgotten almost everything after so many years. Of all the insights I’d accumulated during day-long cramming sessions, little was left beyond a vague executive summary. And what I did still remember – that Kant took a daily walk of exactly an hour after lunch, or that Wittgenstein didn’t mind what he ate as long as it was the same every day – didn’t seem terribly relevant.
Happily there was no need to struggle through Kant’s Groundwork of the Metaphysics of Morals again. All those philosophers describe how people ought to think. I wanted to know how people, creatures of flesh and blood, actually do think. Not: is the death penalty necessary or fundamentally wrong? But: why are some people in favour of the death penalty and others opposed to it?
So I started reading again. I read articles by neurologists who had slid people into MRI scanners and asked them to resolve ethical dilemmas. Books by anthropologists who described how tribes that had had almost no contact with Western civilization thought about moral issues. I read evolutionary biologists who were attempting to find out how our morality and our altruism can be reconciled with natural selection, and sociologists who had examined how norms and values differ between cultures. I even did little experiments on my daughter (Ada, five) to find out how her moral sense was developing. (For the concerned reader: no children or animals were harmed in the writing of this book.)
But even when I began to understand how our morality develops, I still had only one piece of the puzzle. Because how do those norms and values influence our views? As a student of philosophy I learned that there is an impenetrable wall between facts on the one hand (Do GMOs cause cancer? Does austerity lead to economic growth?) and values and norms on the other (Are we responsible for the wellbeing of future generations? Should people be rewarded according to their efforts?). From a description of how things are, you can’t deduce how they ought to be, or vice versa. Violating that rule was a cardinal sin. In reality, I now discovered, we sin all the time. We throw together facts and norms in our thinking and turn on the liquidizer. Once blended, the ingredients are almost impossible to separate, just as they are in soup. Which explains why people on either side of a debate often live in their own reality. They each have their own facts that – surprise surprise – perfectly support their views. Everybody is always right.
Hardly an encouraging conclusion. So I went in search of solutions. Perhaps science, even if it is often used purely as a weapon in moral tribal warfare, nevertheless settles some debates. Perhaps there are elements of science that are too hard to liquidize, less susceptible to blending with norms and values.
With science alone we won’t get where we need to be, unfortunately. Absolute certainty can’t be found even in science and there are many questions to which science can’t provide the answers. But we are all living on the same planet, so we need to get along with one another. We’ll have to learn to have more constructive differences of opinion, to try to understand our opponents even if we can’t agree with them. I hope my book will make a contribution. But let’s start at the beginning. With cockroaches.



Ruled by the gut

On cockroaches and ethics
Take a large bowl. Place in it a handful of cockroaches. Add some worms and slugs. Get a spoon, stick it into the wriggling mass and take a large mouthful. Feel the cockroaches crunching between your teeth as your mouth fills with slug mucus, while half-chewed worms wriggle over your lips. Delicious!
As you were reading this you probably frowned, screwed up your nose and closed your mouth. That’s the standard ‘yuck’ face pulled by people all over the world when confronted with something disgusting. Some among you may have felt slightly sick. Sorry about that. But as a result of reading this disgusting paragraph something else has changed too: your moral judgement.
One morning in 2008 Alex Jordan, a colleague of psychologist Jonathan Haidt, went to stand at a pedestrian crossing.[1] He had a spray with him that, according to the website where he bought it, ‘was developed by well-known scientists following the consumption of chili and beans’. It was designed to release hydrogen sulphide, a major component of the gas emitted by our rear ends. Fart spray, in other words. When he reached the crossing he asked passers-by a number of questions. Should first cousins be allowed to marry? Should you be allowed to broadcast a documentary in which the interviewees did not agree to be interviewed? Is it wrong to drive to your workplace if it’s so close you could easily walk there? Before half of these encounters, Alex sprayed the area liberally, without any of the participants noticing, so that his questions were answered in a cloud of fart smell. He spoke to the other half only after the smell had dissipated. When he and his colleagues analysed the results later, the conclusion was obvious. The smell unquestionably had an influence. Test participants surrounded by the ‘strong stink’ were harsher in judging moral transgressions.
Scientists have used all kinds of methods of making people feel disgusted. You can show test participants that scene in Trainspotting where Renton dives into an unflushed toilet, or interview them in a filthy room full of used tissues and half-eaten pizzas, or confront them with photographs of suppurating wounds. In all these experiments the conclusion was the same: the more disgusted people are, the more severe their moral judgements. The effect is not confined to judgements concerning situations that are thought revolting in themselves by some people, such as homosexuality or incest; it also takes effect with moral dilemmas that have no apparent connection with disgust. People are more likely to regard a proposed way of sharing out a sum of money as unfair if you make them feel disgusted first.[2]
It works even without used tissues, nauseating photographs or fart spray. You can hypnotize people so that they experience a sense of disgust when they hear a specific word.[3] Jonathan Haidt and Thalia Wheatley tried this. They hypnotized their test participants in such a way that half were disgusted by the word ‘often’ and the other half by the word ‘take’. They then presented a number of vignettes such as the following: ‘Congressman Arnold Paxton frequently gives speeches condemning corruption and arguing for campaign finance reform. But he is just trying to cover up the fact that he himself will take bribes from the tobacco lobby, and other special interests, to promote their legislation.’
Participants were asked to say how wrong they felt the Congressman’s behaviour was. Other participants were told the same story but with a minor difference. Instead of ‘will take bribes from’ the story was worded ‘is often bribed by’. The content was otherwise identical, but the presence of the disgust-inducing word made a significant difference. On a moral-reprehensibility scale of 0 to 100, the Congressman scored 78 if the disgust-inducing word was not included, 92 if it was. The same went for the other vignettes, which involved shoplifting, eating a dead dog and stealing books from the library. The presence of the word that induced a sense of disgust abruptly caused the test participants to judge the behaviour to be considerably more immoral.
As a control, Haidt and Wheatley added a vignette in which no moral norm was violated at all. It was about a man called Dan who was responsible for organizing debates between students and professors. Here too there were two versions of the story. One version said that ‘Dan tries to take topics that appeal to both professors and students’, while the other version was that ‘Dan often picks topics that appeal to both professors and students’. Most test participants felt there was nothing wrong with Dan. But if the disgust-inducing word was present, a third of them felt that Dan was nevertheless not completely kosher. They even managed to justify that opinion. One said Dan was a snob, merely out to increase his own popularity, while another considered Dan untrustworthy and believed he secretly harboured nefarious plans.
If our differences of opinion – about the safety of pesticides, the role of Islam in terrorist attacks or whether or not the Black Pete character is insulting – are based on differences in morality, then in order to understand our differences of opinion we first need to inquire about the basis of our moral values. For centuries human morality has been seen as something exalted. It was what made us unique, the characteristic that distinguished us from the other creatures populating our planet. The basis of our moral judgements, people believed, was no less exalted and uniquely human: the judgement of a Divine Being according to some; our sublime human reason according to others. Research involving disgust calls both views into question. Isn’t it rather odd that our sublime morality can be manipulated by means of prosaic phenomena such as fart spray or cockroaches? How can our norms and values be influenced by something we are far more likely to associate with excrement and rotting flesh: disgust?
Translated by Liz Waters


[1] Schnall, S., Haidt, J., Clore, G. L. & Jordan, A. H. (2008). ‘Disgust as Embodied Moral Judgment’, Personality and Social Psychology Bulletin, August, 34, pp. 1096-1109.


[2] Moretti, L. & Di Pellegrino, G. (2010). ‘Disgust selectively modulates reciprocal fairness in economic interactions’, Emotion, April, 10 (2), pp. 169-180.


[3] Wheatley, T. & Haidt, J. (2005). ‘Hypnotic disgust makes moral judgments more severe’, Psychological Science, October, 16 (10), pp. 780-784.