Tuesday, January 25, 2022

A Note on Personal Rationality

I. Bertrand Russell makes a rather funny argument in his thoughtful little essay In Praise of Idleness:

One of the commonest things to do with savings is to lend them to some Government. In view of the fact that the bulk of the public expenditure of most civilised Governments consists in payment for past wars or preparation for future wars, the man who lends his money to a Government is in the same position as the bad men in Shakespeare who hire murderers. The net result of the man’s economical habits is to increase the armed forces of the State to which he lends his savings. Obviously it would be better if he spent the money, even if he spent it in drink or gambling.

But, I shall be told, the case is quite different when savings are invested in industrial enterprises. When such enterprises succeed, and produce something useful, this may be conceded. In these days, however, no one will deny that most enterprises fail. That means that a large amount of human labour, which might have been devoted to producing something that could be enjoyed, was expended on producing machines which, when produced, lay idle and did no good to anyone. The man who invests his savings in a concern that goes bankrupt is therefore injuring others as well as himself. If he spent his money, say, in giving parties for his friends, they (we may hope) would get pleasure, and so would all those upon whom he spent money, such as the butcher, the baker, and the bootlegger. But if he spends it (let us say) upon laying down rails for surface cars in some place where surface cars turn out to be not wanted, he has diverted a mass of labour into channels where it gives pleasure to no one. Nevertheless, when he becomes poor through the failure of his investment he will be regarded as a victim of undeserved misfortune, whereas the gay spendthrift, who has spent his money philanthropically, will be despised as a fool and a frivolous person.

While I tend to agree with much of the spirit of this passage, especially concerning the virtue of one who throws parties, something has obviously gone wrong here. There is a particular part I would like to discuss in relation to what I take to be a fallacy of reasoning: the inference from "no one will deny that most enterprises fail" (which is certainly true) to the implied conclusion that no individual should invest in some industrial enterprise, the bridging premise being that the money could have been used fruitfully, by the individual, elsewhere; in this case, to produce some idle (in the positive sense) good. To be fair to Russell, he is arguing more for a revaluation of what we ought to find valuable, and nothing he says really hinges on this point, but I'd like to explore the inference a little further, as it made me think.


II. We can represent the argument form I have in mind as follows:
(1) Most enterprises fail
(2) If you start or invest in an enterprise, then it will most likely be a waste of money.
(3) Therefore, you should not start or invest in any enterprises.

We can interpret (2) as either a moral waste, as Russell does, or as a prudential waste (in the sense that you'll lose money). Either way, the premise has normative import for you. Now, since (2) is a probabilistic premise, whether (3) follows depends on the meaning of "most" and "most likely." Let's plug in some numbers. According to a three-second Google search, 90% of start-ups fail in the first two years. These are pretty grim numbers! In this domain at least, Russell seems to be entitled to his inference. We might even go further: given how bad these numbers are, it is thoroughly irrational to bother starting or investing in start-ups at all. This is what I take to be obviously wrong, as you likely do too, reader. What's interesting is why.

One of the reasons this argument fails is the fallacy I have in mind. Here I will describe how it unfolds in this particular case, then I will make its form explicit. First, investing. Premise (2) is only true if we assume that people invest in start-ups by selecting a random company from a representative sample of the whole population of start-ups. This is obviously almost never true. (If this is how you invest, I've got bad news for you!) When someone invests in a start-up, it is precisely because they believe it is not your average start-up and that it has exceptional potential—namely, potential to be in the top 10%. Second, starting a start-up. The same reasoning applies. The entrepreneur is creating some product because they think they are capturing an untapped market, or cashing in on one not quite saturated, or perhaps offering some extra good that will edge out the competitors. Regardless of the particular content of the enterprise, the entrepreneur believes that their idea has exceptional potential. They are, after all, acting on that idea.

You might object at this point by offering a charitable interpretation of the inference. It is not saying that no one should ever start or invest in any enterprise; it is just saying that if we are such an entrepreneur or investor, we ought to decrease our confidence in any particular venture actually succeeding. Now, this is no doubt true, but it is entirely trivial. First, most people who are investing in serious ventures are already well aware of such statistics and are betting on them anyway, which reinforces the fact that their confidence is exceptional. Second, even if there were people greatly invested in something who didn't know such statistics, I find it unlikely that learning them would drop their confidence enough to stop them in their tracks (even setting aside sunk-cost effects). They believed their venture exceptional enough to get to the point they are at; why would they stop now? If I told my landlord, who has started a company and is developing an app right now, "you do realise 90% of start-ups fail, right?", he would say: "so what? My app is going to blow up the market, it fucking rocks." He is not irrational, precisely because he believes his app fucking rocks (even if his app actually sucks).


III. The central fallacy in this reasoning is what I will call an individual inference from averages. What do I mean by this? It is when you take the spread of outcomes in some domain for a population and infer something, based purely on that, about what a given individual should do; that is, when you suppose that such inferences are rational for that individual. I am not arguing that such statistics are not genuinely representative or that they're uninteresting. I am arguing that almost nothing can be rationally concluded by a particular individual in light of such statistics alone. This fallacy is just a special case of mistaking the map for the territory.

The reason why this is a fallacy is trivial: you are an individual and thus lie on exactly one point of the distribution. What you should do is rigidly determined by where you lie on that continuum, and what it is rational for you to do is rigidly determined by where you believe yourself to lie on it. On top of that, you either have relatively good knowledge of where you lie on such a distribution, or, at minimum, you believe you do but actually do not. Thus, every would-be entrepreneur and founding investor believes themselves to be in the top 10%. Of course, it is statistically impossible that they are all right (if the stat makes robust predictions), but nonetheless, the fact that these people are out there taking risks tells us that this is what they believe. (I doubt they view themselves as infallible, but I'm guessing many have more than 70% confidence they will succeed.) For each individual taking such risks, premise (2) of the argument is simply false. The probability for the population does not carry over to their individual probability, because it does not take into account the fact that their start-up fucking rocks.
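To put the same point in more standard probabilistic terms: the moment an individual conditions on anything they know (or believe) about their own case, their probability comes apart from the base rate. Here is a minimal sketch in Python, with entirely made-up numbers, of how a founder's private evidence can swamp a 10% population success rate; the function and figures are purely illustrative, not a claim about real start-up data.

    # Bayes' rule: P(success | evidence). All numbers below are hypothetical.
    def posterior_success(base_rate, p_evidence_given_success, p_evidence_given_failure):
        p_evidence = (p_evidence_given_success * base_rate
                      + p_evidence_given_failure * (1 - base_rate))
        return p_evidence_given_success * base_rate / p_evidence

    # Population base rate: 10% of start-ups succeed.
    # Suppose the founder's evidence (say, strong early traction) shows up in
    # 60% of eventual successes but only 5% of eventual failures.
    print(posterior_success(0.10, 0.60, 0.05))  # ~0.57, far above the 10% base rate

The numbers do no work beyond illustration; the structural point is that premise (2) quietly assumes the individual has no such evidence about their own case, which is almost never true.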

One objection here might be to say "no Rowan, the person who created the Dettol 'No-Touch Hand Wash System' was not rational in doing so. They should have known that creating a product with a self-defeating feature will never hold up for long—it was a waste of money! The same thing holds for that new Metaverse thing that guy is creating: it has graphics worse than vanilla World of Warcraft and you can't even give your avatar a foxtail or dragon wings. It's stupid and it will fail; they are fundamentally irrational wastes of money." This objection does not hold. Not because I disagree that this stuff is stupid (I don't), but because objective outcomes (truths) do not determine rationality; beliefs do. If you believe something, that is, you take it to be true (or probably true), then it is rational for you to pursue whatever consequences you believe to follow from that belief (because you take them to be true). This holds regardless of what people outside their belief scheme think of the project, and it holds for all individuals across all possible distributions, not just risk-takers, and for all decisions. (You can read a little more about my views on this in the final section of this post.)

As an aside, this is one of the first philosophical thoughts I have memory of having. It didn't come in exactly this form regarding rationality, but I was onto the same principle in a particular case I came across. I remember learning in early high school about missionaries and how they would travel across the world to recently colonised land only to build a church and go about converting as many "heathens" as they possibly could. For one, I found this deeply immoral, but more than anything I found it incredibly bizarre (especially since I grew up around virtually no religion). The idea that people would travel across the world to enforce their own beliefs on others in the name of "salvation", which I found completely implausible, was utterly stupefying. This led me to try to explain the actions of these people somehow: what could possibly drive it? Thinking more about God, punishment, and hell led me to an answer. My thought went as follows: if you seriously believe, with all your heart, that nonbelievers will go to hell and be punished for eternity, then it is not merely good that you go overseas and preach, it is an absolute moral imperative that you convert as many non-believers as possible. This is because each person saved is the prevention of infinite suffering in hell, and the addition of infinite pleasure in heaven. Such was my thinking at the time, and such is my thinking now. This does not mean that I now think such actions were not deeply immoral; they were. I just don't think the missionaries were irrational for doing what they did; they were just wrong.

Anyway, the lesson here is that almost no inference from a population to an individual holds for that individual. This is because almost no individual will believe themselves to sit at the same point on the distribution as the population average. Even if, after the fact, the individual turns out to have been wrong about their own capacities, the inference I am targeting still did not hold at the time of decision. We see this very clearly in the case of confident start-ups. And we haven't even discussed the majority of the population who assign themselves a less than population-representative chance of success: most people probably think they have less than a 10% chance of making a start-up work, and even those more confident may correctly decide that not even coin-flip confidence is enough.


IV. I want to introduce three more brief examples of this. The first is a contrived example to draw out the point I'm making, while the second and third are real-world examples. 

First, suppose that everyone was offered an honours-level mathematics exam that they may choose whether to complete. They could either choose not to take it, in which case nothing happens, or choose to take it, in which case passing (a grade above 50%) wins them one million dollars and failing gets them killed. Let's suppose (somewhat accurately) that 99.99% of the population would fail this exam. Thus, we have the argument:

(1) 99.99% of the population would fail an honours-level mathematics exam.
(2) If you attempted the exam, then you would most likely (99.99% chance) fail and die.
(3) Therefore, you should not take the exam.

At first blush this seems like a good inference, especially for most people. However, it is not. It is not a good inference because premise (1), the population statistic, is completely independent of whether or not some individual should take the exam, the conclusion (3). For math majors, postgraduates, and math nerds who work on high-level mathematics, this is a great deal—all they need to do is pass. Thus, for this population, (3) is false. For literally everyone else, this is a terrible deal because the exam paper will look like incomprehensible gibberish to them. But the deal is terrible not because a person randomly picked from the population is likely to fail (because (1) is true), but because these individuals each (correctly) know that they cannot pass an honours-level math exam. For this population, (2) is true. The statistic is logically posterior to the individual, not the other way round.
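To make the structure explicit, here is a toy expected-utility sketch in Python. All of the utility values are invented for illustration; the thing to notice is that the 99.99% figure from (1) never enters the calculation, only your own chance of passing does.

    U_MILLION = 1.0      # utility of winning the million (arbitrary units)
    U_DEATH = -100.0     # utility of dying (hugely negative, by assumption)
    U_DECLINE = 0.0      # utility of simply not taking the exam

    def should_take_exam(p_pass):
        # Take the exam only if your expected utility beats declining.
        expected_utility = p_pass * U_MILLION + (1 - p_pass) * U_DEATH
        return expected_utility > U_DECLINE

    print(should_take_exam(0.0001))  # a randomly selected person: False
    print(should_take_exam(0.999))   # an honours math student all but certain to pass: True

Whatever utilities you plug in, the decision turns on where your own p_pass sits relative to the crossover point; the population failure rate does no work at all.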

Second, supposedly 3% of those with PhDs become professors. Thus, someone might tell you "there's a 3% chance of you becoming a professor!" They might go further and say that "you should be realistic about your chances", implying that you should forge some backup plan. Perhaps this is good advice for some, namely those whose beliefs diverge too far from their actual abilities, but it is not good advice for others. If you have an impeccable CV, experience, a record of publishing, all the things generally agreed to be what it takes to become a professor, then perhaps you do not need this advice. Once again, the best that can be said for the stat is that it might lower our confidence a little. There is nothing irrational in the 80% of postdocs who fight to stay in higher education, even though only 10% of them will make it, because they (however erroneously) believe they are good enough.

Third, an example of a more political flavour. One example I have seen used in the US before is having a gun in the household. Take this Vox article, "Living in a house with a gun increases your odds of death", with the subtitle "It's an unnecessary risk." This is a perfect example of what I'm talking about; the language constantly shifts from population-level outcomes to personal ascriptions of risk: "Guns can kill you in three ways: homicide, suicide, and by accident. Owning a gun or having one readily accessible makes all three more likely." There are many reasons a particular individual shouldn't have a gun in the house, e.g., they are suicidal, homicidal, or practice atrocious gun safety and have no intention to improve. If you are any of these things, I'm sure your odds of death would increase. However, what if you are pretty confident that you are none of these things? (You can usually tell.) You'll probably be fine!


V. What I find even more interesting about this, and what I would actually like to impart to you, is that adopting an attitude that repudiates such attempts to dissolve you into a population statistic is, for the most part, correct. We are each utterly unique individuals who differ radically from everything else in the world. We can never learn what we should do, or what is rational for us, by looking at statistics and trends without first correctly placing ourselves relative to those trends. Here is the principle we get out of analysing these arguments:

The only determinant of what you should do is the actual constitution of yourself. Thus, the better you know yourself, the better you are able to align your beliefs, and thus your rational objectives, with that constitution.

This is what it is to flourish. Thankfully, the self happens to be something we also have pragmatic access to. By exploring and experimenting with your possibilities, you come to learn your own capacities, abilities, and, perhaps most importantly, your limits. Know yourself and you will know what you should do. Know yourself and you will know how to live a good life—at least for you.


Note: A friend of mine pointed out that this lines up somewhat with the distinction between ensemble probability and time probability, and the associated fallacies that conflate the two. Ensemble probability is the probability of some outcome for one person in a group after one trial, while time probability is the probability that a single individual will reach some outcome over repeated trials. Thinking about ergodicity tells us that repeated exposure to ruinous risk will eventually exhaust the state space—and will ruin us. One who naively keeps playing Russian roulette after a first win is taking their ensemble probability rather than their time probability. Interestingly, this is a special case of the distinction I am making, not the other way round: if you knew enough about your objective chance of winning over time (your time probability), and hence that you would necessarily lead yourself to ruin, you wouldn't keep playing!
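For concreteness, here is a small simulation sketch in Python using the standard single-bullet revolver (one loaded chamber in six). It is only meant to show how the two quantities come apart: the ensemble probability of surviving a single round is about 5/6, while the time probability of surviving many consecutive rounds collapses towards zero.

    import random

    # One chamber in six is loaded, so a single round is survived with probability 5/6.
    def survives_one_round():
        return random.randint(1, 6) != 1

    # Ensemble probability: many players, one round each.
    players = 100_000
    ensemble = sum(survives_one_round() for _ in range(players)) / players
    print(f"Fraction of players surviving one round: {ensemble:.3f}")  # ~0.833

    # Time probability: one player, many consecutive rounds.
    rounds = 50
    time_probability = (5 / 6) ** rounds
    print(f"Chance of surviving {rounds} consecutive rounds: {time_probability:.6f}")  # ~0.0001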
