A convergence of moral motivations

My aim in this post is to outline a variety of motivations that all point me in broadly the same direction: toward helping others in general and prioritizing the reduction of suffering in particular.


Contents

  1. Why list these motivations?
  2. Clarification
  3. Compassion
  4. Consistency
  5. Common sense: A trivial sacrifice compared to what others might gain
  6. The horror of extreme suffering: The “game over” motivation
  7. Personal identity: I am them
  8. Fairness
  9. Status and recognition
  10. Final reflections

Why list these motivations?

There are a few reasons why I consider it worthwhile to list this variety of moral motivations. For one, I happen to find it interesting to notice that my motivations for helping others are so diverse in their nature. (That might sound like a brag, but note that I am not saying that my motivations are necessarily all that flattering or unselfish.) This diversity in motivations is not obvious a priori, and it also seems different from how moral motivations are often described. For example, reasons to help others are frequently described in terms of a singular motivation, such as compassion.

Beyond mere interest, there may also be some psychological and altruistic benefits to identifying these motivations. For instance, if we realize that our commitment to helping others rests on a wide variety of motivations, this might in turn give us a greater sense that it is a robust commitment that we can be confident in, as opposed to being some brittle commitment that rests on just a single wobbly motivation.

Relatedly, if we have a sense of confidence in our altruistic commitment, and if we are aware that it rests on a broad set of motivations, this might also help strengthen and maintain this commitment. For example, one can speculate that it may be possible to tap into extra reserves of altruistic motivation by skillfully shifting between different sources of such motivation.

Another potential benefit of becoming more aware of, and drawing on, a greater variety of altruistic motivations is that they may each trigger different cognitive styles with their own unique benefits. For example, the patterns of thought and attention that are induced by compassion are likely different from those that are induced by a sense of rigorous impartiality, and these respective patterns might well complement each other.

Lastly, being aware of our altruistic motivations could help give us greater insight into our biases. For example, if we are strongly motivated by empathic concern, we might be biased toward mostly helping cute-looking beings who appeal to our empathy circuits, like kittens and squirrels, and toward downplaying the interests of beings who may look less cute, such as lizards and cockroaches. And note that such a bias can persist even if we are also motivated by impartiality at some level. Indeed, it is a recipe for bias to think that a mere cerebral endorsement of impartiality means that we will thereby adhere to impartiality at every level of our cognition. A better awareness of our moral motivations may help us avoid such naive mistakes.

Clarification

I should clarify that this post is not meant to capture everyone’s moral motivations, nor is my aim to convince people to embrace all the motivations I outline below. Rather, my intention is first and foremost to present the moral motivations that I myself am compelled by, and which all to some extent drive me to try to reduce suffering. That being said, I do suspect that many of these motivations will tend to resonate with others as well.

Compassion

Compassion has been defined as “sympathetic consciousness of others’ distress together with a desire to alleviate it”. This is similar to having empathic concern for others (compassion is often regarded as a component of empathic concern).

In contrast to some of the other motivations listed below, compassion is less cerebral and more directly felt as a motivation for helping others. For example, when we experience sympathy for someone’s misery, we hardly need to go through a sequence of inferences in order to be motivated to alleviate that misery. The motivation to help is almost baked into the sympathy itself. Indeed, studies suggest that empathic concern is a significant driver of costly altruism.

In my own case, I think compassion tends to play an important role, though I would not claim that it is sufficient or even necessary for motivating the general approach that I would endorse when it comes to helping others. One reason it is not sufficient is that it needs to be coupled with a more systematic component, which I would broadly refer to as ‘consistency’.

Consistency

As a motivation for helping others, consistency is rather different from compassion. For example, unlike compassion, consistency is cerebral in nature, to the degree that it almost has a logical or deductive character. That is, unlike compassion, consistency per se does not highlight others’ suffering or welfare from the outset. Instead, efforts to help others are more a consequence of applying consistency to our knowledge about our own direct experience: I know that intense suffering feels bad and is worth avoiding for me (all else equal), and hence, by consistency, I conclude that intense suffering feels bad and is worth avoiding for everyone (all else equal).

One might object that it is not inconsistent to treat one’s own suffering differently from the suffering of others, for instance by arguing that there are relevant differences between the two. There are several points that could be debated back and forth on this issue. However, I will not engage in such arguments here, since my aim in this section is not to defend consistency as a moral motivation, but simply to give a rough outline of how consistency can motivate efforts to help others.

As noted above, a consistency-based motivation for helping others does not strictly require compassion. However, in psychological terms, since none of us are natural consistency-maximizers, it seems likely that compassion will usually be helpful for getting altruistic motivations off the ground in practice. Conversely, as hinted in the previous section, compassion alone is not sufficient for motivating the most effective actions for helping others. After all, one can have a strong desire to reduce suffering without having the consistency-based motivation to treat equal suffering equally and to spend one’s limited resources accordingly.

In short, the respective motivations of compassion and consistency seem to each have unique benefits that make them worth combining, and I would say that they are both core pillars in my own motivations for helping others.

Common sense: A trivial sacrifice compared to what others might gain

Another motivation that appeals to me might be described as a commonsense motivation. That is, there is a vast number of sentient beings in the world, of which I am just one, and hence the beneficial impact that I can have on other sentient beings is vastly greater than the beneficial impact I can have on my own life. After all, once my own basic needs are met, there is probably little I can do to improve my wellbeing much further. Indeed, I will likely find it more meaningful and fulfilling to try to help others than to try to improve my own happiness (cf. the paradox of hedonism and the psychological benefits of having a prosocial purpose).

Of course, it is difficult to quantify just how much greater our impact on others might be compared to our impact on ourselves. Yet given the enormous number of sentient beings who exist around us, and given that our impact potentially reaches far into the future, it is not unreasonable to think that it could be greater by at least a factor of a million (e.g. we may, in expectation, prevent at least a million times as many instances of similarly bad suffering for others as for ourselves).

In light of this massive difference in potential impact, it feels like a no-brainer to dedicate a significant amount of resources toward helping others, especially when my own basic needs are already met. Not doing so would amount to giving several orders of magnitude greater importance to my own wellbeing than to the wellbeing of others, and I see no justification for that. Indeed, one need not endorse anything close to perfect consistency and impartiality to believe that such a massively skewed valuation is implausible. It is arguably just common sense.

The horror of extreme suffering: The “game over” motivation

A particularly strong motivation for me is the sheer horror of extreme suffering. I refer to this as the “game over” motivation because that is my reaction when I witness cases of extreme suffering: a clear sense that nothing is more important than the prevention of such extreme horrors. Game over.

One might argue that this motivation is not distinct from compassion and empathic concern in the broadest sense. And I would agree that it is a species of that broad category of motivations. But I also think there is something distinctive about this “game over” motivation compared to generic empathic concern. For example, the “game over” motivation seems meaningfully different from the motivation to help someone who is struggling in more ordinary ways. In fact, I think there is a sense in which our common circuitry of sympathetic relating practically breaks down when it comes to extreme suffering. The suffering becomes so extreme and unthinkable that our “sympathometer” crashes, and we in effect check out. This is another reason it seems accurate to describe it as a “game over” motivation.

Whereas the motivations listed above all serve to motivate efforts to help others in general, the motivation described in this section is more of a driver of what, specifically, I consider the highest priority when it comes to helping others: namely, to alleviate and prevent extreme suffering.

Personal identity: I am them

Another motivation derives from what may be called a universal view of personal identity, also known as open individualism. This view entails that all sentient beings are essentially different versions of you, and that there is no deep sense in which the future consciousness-moments of your future self (in the usual narrow sense) are more ‘you’ than the future consciousness-moments of other beings.

Again, I will not try to defend this view here, as opposed to just describing how it can motivate efforts to help others (for a defense, see e.g. Kolak, 2004; Leighton, 2011, ch. 7; Vinding, 2017).

I happen to accept this view of personal identity, and in my opinion it ultimately leaves no alternative but to work for the benefit of all sentient beings. In light of open individualism, it makes no more sense to endorse narrow egoism than to, say, only care about one’s own suffering on Tuesdays. Both equally amount to an arbitrary disregard of my own suffering from an open individualist perspective.

This is one of the ways in which my motivations for helping others are not necessarily all that flattering: on a psychological level, I often feel that I am selfishly trying to prevent future versions of myself from being tortured, virtually none of whom will share my name.

I would say that the “I am them” motivation is generally a strong driver for me, not in a way that changes any of the basic upshots derived from the other motivations, but in a way that reinforces them.

Fairness

Considerations and intuitions related to fairness are also motivating to me. For example, I am lucky to have been born in a relatively wealthy country, and not least to have been born as a human rather than as a tightly confined chicken in a factory farm or a preyed-upon mouse in the wild. There is no sense in which I personally deserve this luck over those who are born in conditions of extreme misery and destitution. Consequently, it is only fair that I “pay back” my relative luck by working to help those beings who were or will be much less lucky in terms of their birth conditions and the like.

I should note that this is not among my stronger or more salient motivations, but I still think it has significant appeal and that it plays some role for me.

Status and recognition

Lastly, I want to highlight the motivation that any cynic would rightly emphasize, namely to gain status and recognition. Helping others can be a way to gain recognition and esteem among our peers, and I am obviously also motivated by that.

There is quite a taboo around acknowledging this motive, but I think that is a mistake. It is simply a fact about the human mind that we want recognition, and this is not necessarily a problem in and of itself. It only becomes a problem if we allow our drive for status to corrupt our efforts to help others, which is no doubt a real risk. Yet we hardly reduce that risk by pretending that we are unaffected by these drives. On the contrary, openly admitting our status motives probably gives us a better chance of mitigating their potentially corrupting influence.

Moreover, while our status drives can impede our altruistic efforts, we should not overlook the possibility that they might sometimes do the opposite, namely improve our efforts to help others.

How could that realistically happen? One way it might happen is by forcing us to seek out the assessments of informed people. That is, if our altruistic efforts are partly driven by a motive to impress relevant experts and evaluators of our work, we might be more motivated to consider and integrate a wider range of informed perspectives (compared to if we were not motivated to impress such evaluators).

Of course, this only works if we are indeed motivated to impress an informed audience, as opposed to just any audience that may be eager to shower us with recognition. Seeking the right audience to impress — those who are impressed by genuinely helpful contributions — might thus be key to making our status drives work in favor of our altruistic efforts rather than against them (cf. Hanson, 2010; 2018).

Another reason to believe that status drives can be helpful is that they have proven to be psychologically potent for human beings. Hence, if we could hypothetically rob a human brain of its status drives, we might well reduce its altruistic drives overall, even if other sources of altruistic motivation were kept intact. It might be tantamount to removing a critical part of an engine, or at least a part that adds a significant boost.

In terms of my own motivations, I would say that drives for status probably often do help motivate my altruistic efforts, whether I endorse my status drives or not. Yet it is difficult to estimate the strength and influence of these drives. After all, the status motive is regarded as unflattering, and hence there are reasons to think that my mind systematically downplays its influence. Moreover, like all of the motivations listed here, the status motive likely varies in strength depending on contextual factors, such as whether I am around other people or not; I suspect that it becomes weaker when I am more isolated, which in effect suggests a way to reduce my status drives when needed.

I should also note that I aspire to view my status drives with extreme suspicion. Despite my claims about how status drives could potentially be helpful, I think the default — if we do not make an intense effort to hone and properly direct our status drives — is that they distort our efforts to help others. And I think the endeavor of questioning our status drives tends to be extremely difficult, not least since status-seeking behavior can take myriad forms that do not look or feel anything like status-seeking behavior. It might just look like “conforming to the obviously reasonable views of my peers”, or like “pursuing this obscure and interesting idea that somehow feels very important”.

So a key question I try to ask myself is: am I really trying to help sentient beings, or am I mostly trying to raise my personal status? And I strive to look at my professed answers with skepticism. Fortunately, I feel that the “I am them” motivation can be a powerful tool in this regard. It essentially forces the selfish parts of my mind to ask: do I really want to gain status more than I want to prevent my future self from being tortured? If not, then I have strong reasons to try to reduce any torture-increasing inefficiencies that might be introduced by my status motives, and to try, if possible, to harness my status motives in the direction of reducing my future torment.

Final reflections

The motivations described above make up quite a complicated mix, from other-oriented compassion and fairness to what feels more like a self-oriented motivation aimed at sparing myself (in an expansive sense) from extreme suffering. I find it striking just how diverse these motivations are, and how they nonetheless — from so seemingly different starting points — can end up converging toward roughly the same goal: to reduce suffering for all sentient beings.

For me, this convergence makes the motivation to help others feel akin to a rope woven from many complementary materials: even if one of the strands is occasionally weakened, the others can usually still hold the rope together.

But again, it is worth stressing that the drive for status is somewhat of an exception, in that it takes serious effort to make this drive converge toward aims that truly help other sentient beings. More generally, I think it is important to never be complacent about the potential for our status drives to corrupt our motivations to help others, even if we feel like we are driven by a strong and diverse set of altruistic motivations. Status drives are like the One Ring: powerful yet easily corrupting, and they are probably best viewed as such.

“Team victory” as a key hidden motive

Simler and Hanson’s The Elephant in the Brain has been hugely influential on me. The core claim of the book is that our beliefs and behaviors often serve hidden motives, and that these motives are commonly less pretty than the more noble motives that we usually proclaim.

A key point that is mentioned in the book is the significance of coalitions and coalitional conflicts in human life and human evolution. Specifically, the authors note how, in small-scale coalition politics, “coalitions compete for control, and individuals seek to ally themselves with powerful coalitions”.

Yet it seems that there is more to be said about the significance of coalitional conflicts for our hidden motives than what is covered in The Elephant in the Brain (as the authors would surely agree). Indeed, the book is explicitly an open invitation for others to identify or suggest additional hidden motives, and I will here take up that invitation and suggest a general hidden motive that plausibly plays a large role in much human behavior, namely coalitional success, or “team victory”.

Different categories of hidden motives

It seems to me that we can meaningfully distinguish at least four categories of hidden motives (even though these categories overlap, and the motives are not always hidden):

  • To signal impressiveness (e.g. by showing that you are impressively knowledgeable, athletic, or hard-working)
  • To signal loyalty (e.g. by wearing a sports jersey or a religious symbol)
  • To gain “team victory” (e.g. helping to ensure that your team gains more power than the rival team)
  • To gain “individual success” (e.g. actually getting the calories or sex needed for survival and reproduction)

Of course, from an evolutionary perspective, these motives must all ultimately translate into “individual success”. Yet for humans, pursuing individual success very directly is often a bad way to achieve it — hence these other motives and strategies. (Though it is worth noting that in certain circumstances, it can be fitness-enhancing to pursue “individual success” even at the expense of these other adaptive drives, meaning that there are cases where humans can gain “individual success” by doing things that are positively unimpressive, disloyal, or detrimental to team victory. These are sometimes called scandals.)

Hidden motives: Not all about signaling

An important point implied by the categories above is that hidden motives are not all about signaling, and that our signaling motives sometimes take a backseat to other hidden motives that are even stronger.

For example, signaling loyalty and ensuring team victory seem to be fairly convergent aims for the most part, yet there are probably still many cases where the “team victory” motive is stronger than the loyalty-signaling motive, such as when our actions have a significant influence on the probability of team victory (e.g. slightly reducing our perceived team loyalty — from 10 to 9, say — in exchange for a huge gain in our team’s success would likely have been adaptive in many cases).

Indeed, even when our actions do not have a high probability of influencing outcomes, such as in large-scale politics involving millions of other actors, it is likely that our evolved instincts — which were adapted for small-scale coalition politics — will in many cases still care as much about collective team victory as about individual loyalty-signaling to that team, or even more.

Reasons to think that “team victory” is a strong motive

What reasons do we have for thinking that “team victory” is a strong motive underlying much of human behavior?

The importance of coalitional success

First, there is the fact that individual human success often depended crucially on coalitional success, at the level of intra- as well as inter-group competition (both of which could be lethal). And merely signaling loyalty to one’s own coalition(s) — while important — would often not be sufficient to secure coalitional victory. A serious drive and effort toward actually winning was likely paramount.

As hinted above, actions that optimize for loyalty-signaling and actions that optimize for group victory are probably correlated to a significant extent, but not perfectly so. Individuals whose motives and instincts were optimized purely for loyalty-signaling would thus probably have been less effective at achieving coalitional success than individuals whose motives and drives were optimized more for that aim (i.e. individuals whose motives were to some degree optimized both for intra-group loyalty-signaling and for securing inter-group success and power).

Given the importance of inter-group success in our ancestral environment, it seems reasonable to think that our motives also reflect drives for such success to a significant extent. Hence we should not restrict our focus to intra-group success alone when analyzing human motives. (Note that this picture applies both to group struggles within and between tribes; after all, human social life consists of multiple nested coalitions.)

Some evolutionary theorists, including John Tooby and Leda Cosmides, have provided more elaborate arguments for the claim that we humans have strong coalitional instincts, similarly based on the importance of coalitional acumen in our ancestral environment. Related is Jonathan Haidt’s argument that “groupishness” is a deep feature of human nature. (See also Pinsof, 2018, ch. 3.)

Empirical data and informative examples

In empirical terms, there are studies showing that we often prefer policies that disadvantage our outgroup (e.g. people in a foreign country), even when we have the option to choose win-win policies that benefit our own group as well. Such findings lend some support to a significant “team victory” motive in human decisions and behavior. (Also relevant are the “minimal group paradigm” and “realistic conflict theory”; see Sapolsky, 2017, ch. 11; Clark et al., 2019.)

Examples where the “team victory” motive seems to positively eclipse the loyalty-signaling motive include cases in which people secretly cheat in order to secure “team victory” — behaviors that the cheaters sometimes know will get them hated among their ingroup if they get exposed. Some of the instances of cheating mentioned here appear to fit this pattern.

Board games in which different teams compete against each other may be another example. It seems that people are often more concerned about winning than about signaling loyalty to, and having a good standing among, their teammates; so much so that they sometimes even deride their entire team while being in a relentless rush to win. The same phenomenon also occurs at times in sports. (Board games and sports are arguably both supernormal stimuli that trigger the players’ drive for “team victory” in more overt and systematic ways than do our everyday — mostly hidden — coalitional competitions.)

The fact that the derisive and practically anti-loyal behaviors described above seem to occur with some frequency in competitive domains — despite team cohesion generally being important for team victory, and despite loyalty-signaling in particular seeming reasonably correlated with team victory — suggests that the “team victory” motive is probably also lurking in more ordinary circumstances (where it is expressed in more group-aligned ways). Indeed, one can argue that the drive for “team victory” must be quite strong for it to override and to some extent counteract our otherwise powerful pro-cohesion and loyalty-signaling motives in this way, even if only occasionally.

Introspection

Similarly, consider your own direct experience when you play team sports or board games on a team. Do you plausibly most want to signal loyalty to your team or do you most want to win? While we should not base our views only or even mostly on introspective observations of this kind, they can still provide at least some additional evidence, especially if our felt drive for team victory is particularly strong. (There is, of course, individual variation in terms of how strongly people are motivated by “team victory” — for instance, some people do not seem to care at all whether they win in team board games. Yet the same is true of other hidden motives: some people do not seem particularly keen to signal loyalty or impressiveness in the usual ways, but that hardly undermines the claim that these are significant motives for most people.)

An explanation of “Sudden Patriotic Sports Obsession”?

Finally, one may argue that the “team victory” hypothesis is supported by some of the surprising predictions that follow from it. For example, one such prediction is that people should generally feel a desire to see their national team win in major sports events that are highly publicized (e.g. the FIFA World Cup). And importantly, this should be true even of many people who do not usually follow that sport, and even if they do not identify strongly with their nationality. (Only “many people” because of factors such as individual psychological variation and a lack of exposure to the relevant media channels.)

As far as I can tell, this hypothesis is strongly vindicated. During widely popularized sports events, people who are usually neither sports fans nor patriots indeed tend to become mysteriously preoccupied with the fate of their national team (and I must admit that this is also true of myself: I somehow care about it, despite trying not to, and despite not watching any games).

Yet this phenomenon makes perfect sense if we have strong drives for “team victory” that can readily be triggered by a perception of direct competition between “our group” and “other groups”. This is not to say that an instinctive drive for “team victory” is the only factor that explains this phenomenon of “Sudden Patriotic Sports Obsession”, but it does seem a plausible explanatory factor.

Is the motive of “team victory” really hidden?

One may agree that the “team victory” motive is common and strong, yet dispute that it is at all hidden. It is, after all, unmistakably clear in the expressed desires and behaviors of athletes and dedicated sports fans, as well as in many other explicitly competitive arenas of human life.

However, in supposedly nobler and more cooperative spheres, such as in academia or in activist circles, the “team victory” motive does indeed appear quite hidden. Here, attempts to undermine the status of opposing groups, and to increase the status and influence of one’s own group, seem to often be packaged as “intellectual criticism” and “strategic disagreements”. In other words, the text of the conversation may be a technical discussion about some obscure claim, while the subtext — the underlying driver of the dispute — may be a fight for coalitional victory and dominance.

Of course, these are not the kinds of motives that sophisticated and prosocial folks are supposed to have, and hence such folks are forced to find more indirect and sophisticated ways to act them out.

Hanson makes a similar point about our lust for power — something that we would usually gain through “team victory” in our ancestral environment:

We humans evolved to lust after power and those who wield power, but to pretend our pursuit of power is accidental; we mainly just care about beauty, stories, exciting contests, and intellectual progress. Or so we say.

Indeed, like the other hidden motives identified by Simler and Hanson, our motive to achieve “team victory” and power is probably mostly unconscious in situations where we are not supposed to act on this motive, since being unconscious about such a norm-transgressing motive might make us better able to deny the accusation that we are acting on it.

Even in politics, which is obviously competitive, we almost always frame our motives purely in terms of impartial aims to “help the world” and the like (Simler and Hanson, 2018, ch. 16). We rarely frame them in terms of wanting our team to win, even though there is much evidence that this is in fact a strong motive underlying our political behavior.

Tentative critique

A point of criticism I would raise regarding The Elephant in the Brain is that it seems to focus almost exclusively on hidden signaling motives, and that it thereby underemphasizes other hidden motives, such as “team victory”. Yet to consistently give overriding weight to signaling explanations relative to other, often more disturbing and unflattering explanatory motives seems to me unwarranted. After all, signaling explanations — e.g. explanations that invoke loyalty-signaling motives over “team victory” motives — are not more plausible by default.

The following are some examples from the book where I think the “team victory” motive is likely to play a significant role (to be clear, I am not claiming that “team victory” necessarily plays a greater role than the hidden motives identified by Simler and Hanson; my claim is merely that the “team victory” motive plausibly also plays at least some significant role in these areas).

Conversation

Simler and Hanson emphasize impressiveness-signaling as the key hidden motive of our conversations, including when it comes to academic conversations in particular. This seems right to me. But it appears that “team victory” is also an important hidden motive in our conversations, and that it sometimes even overrides the impressiveness motive. In particular, many academic conversations and disputes are plausibly more driven by a crude desire for “team victory” than by a motive to signal impressiveness — especially when these disputes are chiefly impressive in terms of how primitively tribal they are.

Art

On art, the authors again highlight the individual motive to impress as the key hidden motive, and I again think they are right. But even here, I suspect that “team victory” can play a surprisingly significant role, beyond just the (also significant) motive of wanting to personally affiliate with impressive artists. For example, beautiful cities, such as Florence and Budapest, are themselves pieces of art that can provide a strong sense of pride and “team victory” to the local inhabitants — including their leaders — which might help explain the creation of all this art (even if “team victory” may not be the main motive). And note that this is arguably an even more cynical motive than is bare impressiveness; “we’re creating all this art to make a good impression on you” seems considerably more prosocial than “we’re creating all this art to beat your team”.

Likewise, people sometimes seem to view their best artists in much the same way that they view their best athletes: as individuals who can symbolically match and beat those on the other team. The same appears true of the way people sometimes view their best scientists, intellectuals, fashion models, etc. Our most famous and prestigious people can serve as tokens of team status and “team victory”.


“Our church is bigger than yours”

“But our parliament can beat your parliament”

Charity

Simler and Hanson argue that the main hidden motives behind charity are to signal our wealth and empathy. Again, I think they are right. But it seems plausible that charitable behavior can also be motivated to some extent by a desire for “team victory”, such as when people donate toward the promotion of their own religion, political faction, or activist ingroup.

Religion

The hidden motives the authors ascribe to religious behavior are community bonding and loyalty-signaling, which seems right. But “team victory” is probably also an influential motive (cf. Tuschman, 2013, ch. 7). An extreme example might be religious wars, in which one religion would essentially try to beat another, plausibly motivated in part by a drive for “team victory”. A less extreme example might be apologists and missionaries who seek to defend their faith and convert others — for many such people, the “team victory” motive plausibly plays some role, even if they also have other motives (e.g. being impressive to the ingroup, seeking to get into heaven, or genuinely trying to help other people).

Politics

The authors identify loyalty-signaling as a key hidden motive underlying our political behavior. This seems right. But as noted earlier, it is plausible that we are also strongly motivated by “team victory”. After all, even when following an election in private, partisan voters still seem to fervently root for the victory of “their team”, not unlike people who eagerly want their team to win in board games or in sports. And again, just as many sports fans would be willing to quietly take off their sports jersey (i.e. their personal signal of loyalty) if they thought it significantly increased their team’s chances of winning, it seems that many political actors would likewise be willing to quietly forego loyalty-signaling to a significant extent provided it could help their political team bring home the desired win.

Potential biases that follow from this?

Lastly, it is worth briefly pondering how this drive for “team victory” might bias our outlooks and priorities. A plausible bias I see is a tendency to overstate the extent to which “our team winning” is the key to creating better outcomes from an impartial perspective.

That is, our coalitional intuitions might at some level hold that “if our coalition wins, that is a total success; if their coalition wins, that is a total disaster”. After all, in terms of reproductive success, this was probably often true in the context of intense coalitional conflicts in our ancestral environment. But it seems considerably less true from an impartial perspective, especially in the context of modern political competition between similar parties, or among different factions of activists who have broadly similar aims.

In other words, our intuitions are plausibly much too afraid of (reasonably similar) “outgroups” in the modern political and altruistic landscape, and we may well overestimate how much better “our group” would do compared to “their group” when it comes to creating beneficial outcomes for everyone.


Acknowledgments

I am grateful to Tobias Baumann and Robin Hanson for helpful feedback.
