**Saturday, 21st July 2012**

Westminster Retreat is a pleasant resort on the edge of a nature reserve. It is not completely cut off from civilization - there are many small houses around - but it is surrounded by trees that hide them from view. There is a garden, a pool, and benches, but most importantly for our minicamp, two buildings, the "manor" and the "lodge", where most of the program took place. The manor had a dining hall, a lecture hall, and a conference room on the ground floor, and guest rooms on the first floor. The lodge had two lecture rooms on the ground floor, and likewise guest rooms on the first floor.

This minicamp had 24 participants. Sometimes we had a lecture all together, but mostly we were divided into smaller groups, typically three (8+8+8 or 12+6+6). For lectures we used the rooms in both buildings. At the beginning we received a schedule: who, when, where, what. Barbara and I were usually in different groups, but we had mostly the same program, only what one of us had in the morning, the other had in the afternoon, and vice versa. The lectures were 45 or 90 minutes long; some of them were split into multiple parts.

In the garden some American birds sometimes scurried about; they were colored like sparrows and shaped like ducks. Sometimes we saw a fawn. The staff warned us about foxes coming in the evening - that they never attack people, just bark at them from the bushes. I did not see the foxes.

Before I start describing the Rationality Minicamp program, I have to praise the cooks. Rationalists are picky eaters: vegetarian diets, paleo diets, gluten-free diets... everyone had their own. Yet all the meals tasted great and seemed very healthy, which I appreciated especially after my experience with buying the local food.

So, finally...

At the beginning we received printed materials for the lessons. (Today they really help me write this blog post.) First was a checklist of rationality habits that everyone could fill in for themselves. What do we mean by "rationality habits"? For example:

- When someone tells me something ambiguous, I notice it and ask them for a specific example.
- I notice when my mind searches for arguments *for a side* instead of neutrally evaluating the available information, and I realize that it is a mistake.
- I notice my lack of curiosity.
- I notice myself trying to avoid certain thoughts.
- I seek the real (historical) origins of my opinions, emotions, and habits; and while doing so I can ignore the justifications which are not the true origins.
- When I disagree with someone, I try to make a specific *prediction* about which we disagree, to make sure that our disagreement is about the real world, not just about words.
- I do not rely on magical "free will", but create conditions and habits which influence my behavior in a desired direction.

This is just a few items from a long list; many of them would deserve a separate discussion. The questionnaire also included specific examples. The difficult part was to mark for each of these habits *when I last used it* in some specific situation. (Today or yesterday? Last week? Last month? Last year? Earlier? Never?) After filling in the first two pages I started feeling like a fool, so I postponed the questionnaire with the excuse that so much English text makes me tired. I never completed it. But if the goal was to show us that we still lack many rationality skills, that goal was achieved one hundred percent.

(A specific example: A few weeks before the minicamp I realized that not only do I not write anything for my web page, which I had developed intensely between December and February, but I *don't even think about it* anymore. Why? First I had some small complications in my life, blogging was not the top priority, and I did not have time for it. Later, when the complications were solved, I felt ashamed for failing at my original plan to write an article every week. To make the shame bigger, by the irony of fate, my most recently published article was a Slovak translation of How to Beat Procrastination. My brain, rather obliging in similar situations, found a simple way to deal with the feelings of shame: just don't think about it. Which prolonged my idleness, which increased the feelings of shame at the occasional moments of thinking about my web page. Luckily, I recognized this mechanism a few weeks before the minicamp, so I decided to make a lot of notes and photos there, and use them to restart my blog.)

Then we had a more optimistic "scavenger hunt" - a space to write various notes about ideas we had during the minicamp that could improve our lives after we return home. We were supposed to write our ideas there anytime during the week, so we would not forget them in the tide of new information, nor lose them in heaps of other text. This space for notes was divided into many sections with inspiring captions; again, here are just a few examples:

- A question important to you, for which you could easily collect relevant information.
- A behaviour whose goals you have understood, and which you now want to change.
- A cheap way to free up some attention.
- An emotional reaction you will try to change - and how specifically.
- An attitude you have internalized in the past from your parents, community, or friends.

(A specific example: Being an active user of many internet services, besides the typical spam I also receive many miscellaneous notifications. I don't want to suppress them completely, but on the other hand they burden my attention, especially when someone sends me three reminders about the same thing, makes no distinction between important and trivial matters, or just regularly reminds me of their existence. Every day I get a few mails that I don't want to process at the given moment, but neither do I want to leave them unread in the inbox, nor decide whether I should move them to another folder, etc. I realized that this inconvenience could be almost fully solved by dedicating one afternoon to clicking through the settings of all these services, reducing the notifications to a minimum, and creating a "Services" folder with a subfolder for each persistent service, plus a rule that automatically moves the given mails to the subfolder and marks them as read; so that *I can decide* to handle them later at a time convenient *for me*. So I did, and now the incoming mails do not bother me so much.)

Another minicamp-wide activity was a *prediction market*. On the walls we had posters with various predictions, such as "It will be raining on Monday at 12:00." Our task was to estimate the *probability* of each prediction being true or false. Some predictions were posted by the organizers; we could also suggest our own, on the condition that their truth or falsity would be unambiguously decided before the end of the minicamp. We also received playing chips to make bets among ourselves at any time.

Why all this? How are betting and gambling related to rationality? Shouldn't rational people actually avoid gambling and stick to what is 100% sure?

Probability is an aspect of life. Some probabilities are so low or so high that we can practically treat them as 0% or 100%, but many probabilities important to us are somewhere in between. The probability that it will rain tomorrow at noon. The probability that I can buy fresh bread in the shop. The probability that in a year, or in ten years, I will be employed, healthy, or even alive. These things we cannot know for sure; we can imagine either outcome, but for rational decisions we also need to know, at least approximately, their likelihoods. It is not good to lump all the probabilities between 1% and 99% together as "something that could be so, or could be otherwise".

It is easiest to illustrate the importance of probability estimates with repeated events. One event is random; many events are a statistic following the law of large numbers. I do not know what number I will roll on a die. But I can tell that if the die is fair and I roll it a thousand times, each number will come up in approximately one sixth of the results. If I knew statistics better, I could also give intervals and the probability that the total number of sixes falls within them. For example, I can rely on getting at least one six in the thousand rolls; even if within the next billion years humanity inhabited a billion planets in the galaxy, and on each planet billions of people rolled dice all the time in series of a thousand rolls, it is improbable that anyone would ever roll a series without a single six. The probability of a meteor falling from the sky and killing me at this very moment is more than a million times higher.
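To get a feel for how extreme such a probability is, here is a quick sketch (plain Python, no external libraries) of the chance of rolling a thousand fair dice without a single six:

```python
# Probability that a single fair die roll is NOT a six
p_no_six = 5 / 6

# Probability of zero sixes in 1000 independent rolls
p_no_six_in_1000 = p_no_six ** 1000

print(f"P(no six in 1000 rolls) = {p_no_six_in_1000:.2e}")
# The result is on the order of 10^-80 -- a number so small that
# practically no repetition in human history could ever produce it.
```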

But back to everyday life. Suppose I go for a trip every weekend when the weather forecast says the probability of rain is smaller than 5%. Looking at a specific day, the result is random: I either get wet or not; I expect not to get wet, but I cannot rule it out. Looking at the whole year, it is statistics. Let's suppose I went on 100 trips in total, and the probability of rain was always exactly 5%. That means I got wet *on average* during 5 trips; with probability 90% it was between 2 and 8 trips; with probability 99% between 1 and 11 trips. The chance of getting wet during 16 or more trips in a year is about 1:10,000; during 18 or more trips about 1:2,000,000. Repetition moves us from ordinary probabilities into the realm of "almost certainties".
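Tail probabilities like these can be computed exactly from the binomial distribution; a minimal sketch using only the standard library (the helper name `binom_tail` is mine):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least
    k successes in n independent trials with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 100, 0.05             # 100 trips, 5% chance of rain on each
print(binom_tail(n, p, 16))  # chance of getting wet on 16 or more trips
print(binom_tail(n, p, 18))  # chance of getting wet on 18 or more trips
```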

But many situations do not repeat. I cannot verify my estimate of the probability of the European Union dissolving within ten years by repeating the experiment a thousand times. But I can group my predictions by their assigned probabilities and at least check those groups. For example, if in a hundred different situations I have made predictions with 90% certainty, then my predictions should come true in approximately 90% of those situations, even if the situations were completely different. If I have a habit of saying things happen with 90% probability, but half of the time I am wrong, then almost certainly I am overconfident. On the other hand, if I have a habit of speaking about 90% probability, but I was right in every one of many predictions, maybe I am underconfident.
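This calibration check is easy to automate. A sketch of the grouping idea, with made-up illustrative data (the predictions below are hypothetical, not from the minicamp):

```python
from collections import defaultdict

# Hypothetical log of (stated probability, did it actually happen?) pairs
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

# Group predictions by the probability the predictor stated
buckets = defaultdict(list)
for stated, happened in predictions:
    buckets[stated].append(happened)

# Compare the stated probability with the actual frequency of being right
for stated, outcomes in sorted(buckets.items()):
    actual = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> actually true {actual:.0%} "
          f"({len(outcomes)} predictions)")
```

With a real, longer log, a large gap between the stated and actual columns would indicate overconfidence or underconfidence.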

However, most people do not specify their predictions in percentages; and when they do, their estimates are usually quite imprecise. When an average person speaks about 90% certainty, the real certainty is about 2/3. When companies are managed using such estimates, billions in losses occur. Very small probabilities are either ignored completely or greatly overestimated. Which is why an average person either has no insurance, or gets persuaded to buy something a hundred times more expensive than appropriate. And it is rather easy to fix the worst biases -- someone gives you a thousand questions, for example from an encyclopedia, and you write down your guesses and their probabilities. At the end you evaluate the results and gradually learn which *feeling of certainty* corresponds to which numerical *probability*. This was our homework, with the help of a computer. In one afternoon you can learn to distinguish between 60% and 90%, or 90% and 99%. Which is a decent level, unless you want to invest on the stock market or play poker professionally. Imagine that you have a plan whose success depends on three independent factors *A, B, C*. If the probabilities of all three factors are 60%, or 90%, or 99%, the resulting probability of the plan's success is 22%, or 73%, or 97%. I hope you agree there is a huge difference between your projects succeeding in three of four cases or in one case of five; or between a disaster happening in one of your thirty plans or in every fourth one.
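The arithmetic for the three-factor plan is simple enough to verify directly: with independent factors, the probabilities multiply.

```python
# Success of the whole plan = product of the three independent factors
for p in (0.60, 0.90, 0.99):
    success = p ** 3
    print(f"{p:.0%} per factor -> {success:.0%} overall")
# 60% -> 22%, 90% -> 73%, 99% -> 97%
```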

The statistics of probability estimates are also a way to measure human rationality. If someone is always right (which does not mean "others agree with them", but "their opinions are confirmed by reality, whether others agree or not"), they probably understand things better than someone who is correct only once in a while, by chance. But how can we evaluate the correctness of probability estimates? What if a rational person says "10% probability it will rain tomorrow", but two irrational people respond with "100% probability it will rain" and "100% probability it will not rain"? One event is not enough to show the difference; one of the irrational ones will also be proven right, and their self-confident claim may make an even better impression. Only statistics will show that this self-confident person is bluffing, and their predictions should be taken with a pinch of salt. Only long-term statistics will show whether the estimated 10% is really 10%, not more, not less.

Now imagine two people: the first says tomorrow it will rain with probability 10%, the second says 20%. Later they will both argue with someone else about something else, and the same situation (the same two people, the same topic, the same estimates) will likely never repeat. How can we measure rationality in this case?

The solution is to calculate how many bits of information each of them added relative to the other. A bit corresponds to a binary logarithm. Let's suppose it *did* rain. The first one predicted rain with probability 10%, the second one increased it to 20%, which is twice as much. This is one bit of information, because *log₂(20%/10%) = 1*. On the prediction market the first person would lose one bit, and the second person would gain one bit.

On the other hand, if it did *not* rain, the second person predicted this outcome with probability 80% (we use the probability of *not* raining, not the probability of raining; always the probability of what *really happened*), and the first one increased it to 90%. This is 0.17 bits of information, because *log₂(90%/80%) = 0.17*. On the prediction market the second person would lose 0.17 bits, and the first person would gain 0.17 bits. For convenience we could calculate in centibits; in the former case the victory is worth 100 centibits, in the latter 17 centibits.
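The scoring rule from the two paragraphs above can be written as a small function (a sketch; the helper name `score_centibits` is mine, and a centibit is just a bit times 100):

```python
from math import log2

def score_centibits(p_old, p_new, happened):
    """Centibits gained by updating the estimate from p_old to p_new,
    where both are stated probabilities of the event happening.
    We always score the probability of what really happened."""
    if not happened:
        p_old, p_new = 1 - p_old, 1 - p_new
    return 100 * log2(p_new / p_old)

print(score_centibits(0.10, 0.20, True))   # it rained: +100 centibits
print(score_centibits(0.10, 0.20, False))  # it did not rain: about -17
```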

If you have never done such a calculation, you may be surprised that the rewards are asymmetric: why 100 centibits in the former case and only 17 in the latter? But remember that *both* participants *agree* that the probability of not raining is much larger than the probability of raining. The argument is only about whether it is four times or nine times larger. If the first one is right, in 1 of 10 cases they lose 100 points, but in 9 of 10 cases they gain 17 points, which is a long-term win. But if the second one is right, in 8 of 10 cases they lose 17 points, but in 2 of 10 cases they gain 100 points, which is also a long-term win. The total number of points is conserved; they cannot both gain points by making a clever argument. (By the way, if you are good at math, you can work out how many points a person would lose by predicting something at 100% and being wrong. At the minicamp, estimates of 0% and 100% were forbidden.)

If more people express their estimates on the same topic over a longer time period, it works like this: The first person writes their estimate on the poster. The second person writes their estimate below it, etc. When the result is known, the first person is compared with the second, the second with the third, etc.; everyone is compared only with those immediately above and below them. The current estimate is always the *most recently written number*. One person could write their estimate on the same topic multiple times, for instance if they change their opinion, or if they get corrected by someone else but still firmly believe their original opinion. For example, if the rain forecasts are, in order, 10%, 20%, 15%, and it did rain, the first person loses 100 points, the second one gains 142 points, and the third one loses 42 points, because there are 100 centibits between 10% and 20%, and 42 centibits between 20% and 15%.
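One way to implement this pairwise comparison is sketched below (the helper name `chain_scores` is mine; each person is scored against their immediate neighbours in the chain):

```python
from math import log2

def chain_scores(estimates, happened):
    """Score a chain of estimates for one event: each estimate wins
    points from its predecessor and loses points to its successor."""
    # Always work with the probability of what really happened
    probs = estimates if happened else [1 - p for p in estimates]
    # Centibits transferred at each adjacent pair (later vs earlier)
    deltas = [100 * log2(b / a) for a, b in zip(probs, probs[1:])]
    scores = []
    for i in range(len(probs)):
        gain = deltas[i - 1] if i > 0 else 0           # won from predecessor
        loss = deltas[i] if i < len(deltas) else 0     # lost to successor
        scores.append(gain - loss)
    return scores

print(chain_scores([0.10, 0.20, 0.15], happened=True))
# roughly [-100, +142, -42]; the scores always sum to zero
```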


viliam@bur.sk