Eliezer started the Saturday education with two lectures about **Bayesian probability calculation**. We started with simple mathematical exercises, and gradually progressed to real-life examples.

The calculation itself is very simple; you just have to be careful. Many people confuse "if *A* then *B*" with "if *B* then *A*", and this is a similar mistake, just with probabilities. You have to distinguish between "every third apple is rotten" and "every third rotten fruit is an apple". More precisely, the goal is to correctly convert between these two values, using the necessary additional data.

For example, imagine a dangerous illness which infects one person in a thousand. Scientists develop a test for this illness with 95% reliability. (The tested patient will receive correct information about their health with probability 95%, and incorrect information with probability 5%.) You decide to be cautious and take the test, and it tells you that you are ill. What is the probability that you are really ill?

Most people will think like this: "The test gives me correct information with probability 95%, and incorrect information with probability 5%. The test said that I am ill. Therefore I am ill with probability 95%, and healthy with probability 5%."

This line of thought contains one seemingly small methodological error, which happens to change the result dramatically. The essence is that new information changes probabilities. Before you learned the result of your test, you really *had* a 95% probability of getting a correct result. But the information that *the test declared you ill* changed the probabilities. Having this information, you are no longer a random person in the group of all patients, but a random person in the group of "patients with positive results".

It can be explained more simply with a die. Before you roll, your probability of getting a six is 1/6. So you roll the die, but you don't look at the result. Even now you can say there is a six with probability 1/6. Then you look at the die and see... Whatever exactly you see, the probability of a six is no longer 1/6. If you see a six, the probability becomes almost 100%. (The words "almost 100%" include a small chance that you made a mistake when counting the dots, or that you are hallucinating.) On the other hand, if you see a one, two, three, four, or five, the probability of a six becomes almost 0%. By getting the new information you moved from the group of "people who rolled a die" to the group of "people who rolled a six" or "people who didn't roll a six". In the former group the probability of a six is almost 100%, in the latter it is almost 0%, and because the latter group contains five times more people, the weighted average over both groups remains 1/6. But that is a weighted average for *both* groups, while you are already in *one* specific subgroup.

Similarly, the group of patients is composed of a "patients with positive results" group and a "patients with negative results" group. The probability of a correct test result, as a weighted average over both groups, is 95%. But we are asking about the probability in the "patients with positive results" group, and the answer may be very different there.

This probably sounds complicated, but in fact it is very easy if you start your calculation from the correct place. The correct starting place is the part, probably ignored by most readers, saying that the illness infects one person in a thousand. (Had the problem not given you this information explicitly, how many of us would realize that it is necessary for a correct calculation?) So the ratio of ill to healthy people in the population is 1 : 999. Complicated? I hope not.

The next step is updating on the positive test result, on both sides of the ratio. What is the chance of an ill person getting a positive (correct) result? 95%. What is the chance of a healthy person getting a positive (incorrect) result? 5%. We multiply each side by the respective number and get 1 × 95% : 999 × 5% — the original values multiplied by the conditional probabilities, on both the left and the right side. After multiplication we get 95 : 4995. Even without further calculation we see that the left side is much smaller than the right side, which means that even after getting the positive result, illness is still less probable than health. The calculation gives us approximately 2% : 98%.
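This odds-form calculation is easy to check in code. A minimal sketch in Python (the variable names are mine, not from the lecture):

```python
# Odds-form Bayes update for the illness-test example.
ill, healthy = 1, 999       # prior odds: one ill person per 999 healthy

# Multiply each side by the probability of a positive result:
# 95% for an ill person (correct), 5% for a healthy person (error).
ill *= 0.95                 # 1 * 0.95 = 0.95
healthy *= 0.05             # 999 * 0.05 = 49.95

# Normalize to get the posterior probability of illness.
p_ill = ill / (ill + healthy)
print(round(p_ill * 100, 1))  # 1.9 -- i.e. approximately 2%
```

The whole trick is that the prior ratio 1 : 999 enters the multiplication before the test's reliability does.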

How is it possible that a positive result of a 95% reliable test gives only a 2% probability of the illness? The whole mystery is in the fact that the incidence of the illness is much smaller than the error rate of the test; therefore a positive result is more likely caused by the error than by the illness.

Another example: In a barrel there are red and yellow apples. Every third red apple is rotten. Every tenth yellow apple is rotten. I pick a random apple from the barrel, and it is rotten. What is the probability that it is red?

Calculate for yourself...

A careful reader has noticed that a critical piece of information is missing: what is the ratio of red to yellow apples in the barrel? Without it, we cannot solve this problem. For example, if the barrel contained only red apples (or, to satisfy the letter of the original problem, a million red apples and one yellow apple), then the rotten apple is obviously red. On the other hand, if the barrel contained only yellow apples, then the rotten apple is obviously yellow. And what about other ratios?

If the original ratio of red to yellow apples is 1 : 1, we can multiply both sides by the conditional probability of the apple being rotten, getting 1 × 1/3 : 1 × 1/10 = 1/3 : 1/10 = 10 : 3 = 10/13 : 3/13. Therefore the selected rotten apple is red with probability 77%, and yellow with probability 23%.

However, if the barrel held three times as many yellow apples as red ones, the starting ratio would be 1 : 3, and after the multiplication we would get 1 × 1/3 : 3 × 1/10 = 1/3 : 3/10 = 10 : 9 = 10/19 : 9/19. The selected rotten apple would then be red with probability 53%, and yellow with probability 47%.

One more example, now with complete data: In a school there are 40% boys and 60% girls. All the boys wear trousers; half of the girls wear trousers, and half wear skirts. We see someone in trousers at a distance; is it a boy or a girl? The original ratio is 4 : 6; multiplied by the conditional probabilities of wearing trousers it is 4 × 1 : 6 × 1/2 = 4 : 3 = 4/7 : 3/7, so it is a boy with probability 57%, and a girl with probability 43%.

With a bit of practice you could do this calculation in your head, except perhaps the division in the last step.

By the way, the ratios can contain more than two numbers. If the barrel contains red, yellow, and green apples in the ratio 1 : 3 : 2, and if every third red apple, every tenth yellow one, and every fifth green one is rotten, the ratio of rotten apples is 1 × 1/3 : 3 × 1/10 : 2 × 1/5 = 1/3 : 3/10 : 2/5 = 10 : 9 : 12 = 10/31 : 9/31 : 12/31, which means the selected rotten apple is red with probability 32%, yellow with probability 29%, and green with probability 39%.
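All the apple and trousers examples follow one recipe: multiply the prior ratio term by term by the conditional probabilities, then normalize. A small helper function (the name `update_odds` is my own invention, not from the text) could look like this:

```python
def update_odds(prior, likelihoods):
    """Multiply a prior ratio term by term by conditional
    probabilities, then normalize to posterior probabilities."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [x / total for x in joint]

# Red : yellow apples at 1 : 1, rotten with probability 1/3 and 1/10.
print(update_odds([1, 1], [1/3, 1/10]))          # ~[0.77, 0.23]

# Red : yellow : green at 1 : 3 : 2, rotten at 1/3, 1/10, 1/5.
print(update_odds([1, 3, 2], [1/3, 1/10, 1/5]))  # ~[0.32, 0.29, 0.39]
```

The function works for any number of hypotheses, which is the point of keeping the numbers as a ratio rather than a single probability.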

Also, if we gradually get more information, we can update the numbers multiple times. Imagine a country with many fake coins, where one side comes up twice as often as the other. Specifically, on 25% of the coins heads come up twice as often as tails; 50% of the coins are fair; and on 25% of the coins tails come up twice as often as heads. We flip a coin repeatedly, getting: heads, heads, tails, heads. What is the probability that the coin is fair?

The original ratio is 1 : 2 : 1. The probability of getting heads is 1/2 on the fair coin, and either 2/3 or 1/3 on a fake one, which after the first flip gives us 1 × 2/3 : 2 × 1/2 : 1 × 1/3. After the second flip we get (1 × 2/3) × 2/3 : (2 × 1/2) × 1/2 : (1 × 1/3) × 1/3, and after all four flips it is 1 × 2/3 × 2/3 × 1/3 × 2/3 : 2 × 1/2 × 1/2 × 1/2 × 1/2 : 1 × 1/3 × 1/3 × 2/3 × 1/3 = 8/81 : 2/16 : 2/81 = 128 : 162 : 32 ≈ 40% : 50% : 10%. With probability 50% the coin is fair.
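The repeated update is naturally a loop, one multiplication per flip; a sketch with the coins ordered head-biased, fair, tails-biased, as above:

```python
odds = [1, 2, 1]             # prior ratio: head-biased : fair : tails-biased
p_heads = [2/3, 1/2, 1/3]    # probability of heads for each coin type

for flip in "HHTH":          # the observed sequence of flips
    factors = p_heads if flip == "H" else [1 - p for p in p_heads]
    odds = [o * f for o, f in zip(odds, factors)]

total = sum(odds)
print([round(o / total, 2) for o in odds])  # approximately 40% : 50% : 10%
```

Because multiplication is associative, it does not matter whether we normalize after each flip or only once at the end.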

If we flip the coin a fifth time and get heads, we can continue the calculation where we stopped it: 128 × 2/3 : 162 × 1/2 : 32 × 1/3 = 512 : 486 : 64 ≈ 48% : 46% : 6%, which decreases the probability of the fair coin to 46%.

But if we get tails on the fifth flip, the calculation continues 128 × 1/3 : 162 × 1/2 : 32 × 2/3 = 256 : 486 : 128 ≈ 29% : 56% : 15%, which increases the probability of the fair coin to 56%.

The important thing to remember is that the resulting probability is given not only by the conditional probabilities, but also by the prior probability. Otherwise you could one day meet a person claiming to have supernatural abilities and willing to prove it by rolling sixes on two dice... and then really getting the two sixes, which would "scientifically prove" their supernatural abilities at significance level p < 0.05 (the chance of two sixes is 1/36 ≈ 0.028), which is enough to publish in many scientific journals. I wish I were only joking...

For the readers who understand some statistics and got nervous reading the previous paragraph: The trick is in confusing the direction of implication, as I said at the beginning of this article. The significance level p = 0.05 means "if the results are random, the *probability of getting this result* is less than 0.05". It does not mean "if we got this result, the *probability of it being random* is less than 0.05". To correctly calculate the second value we need one more piece of information: the proportion of people with supernatural abilities in the whole population. If this proportion is very small, the given result is more likely to be random. Remember the test that detects the illness with probability 95%, and yet is wrong in most of the cases where it reports the illness.
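To put a number on the dice example, here is the same odds calculation, under the made-up assumption that one person in a million has genuine supernatural abilities and that such a person would roll the two sixes with certainty:

```python
# Prior odds: psychic : ordinary, under the made-up assumption
# of one genuine psychic per million people.
psychic, ordinary = 1, 999_999

# Likelihood of rolling two sixes: 1 for a true psychic (a generous
# assumption), 1/36 by pure chance for everyone else.
psychic *= 1.0
ordinary *= 1 / 36

p_psychic = psychic / (psychic + ordinary)
print(p_psychic)  # on the order of 0.00004 -- almost certainly just luck
```

Even with the most generous likelihood for the psychic, the tiny prior dominates, just as it did in the illness test.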

After the introductory lectures we split into groups. Anna led the first of her microeconomics exercises, called "**Applying Fungibility**".

If I tried to summarize all the microeconomics exercises, it would be like this: We have many goals we want to reach in our lives, but our resources are limited. We have many options for changing what we have into what we want to have. If we choose wisely, we can get more out of the same starting resources. A wise choice does not mean pondering endlessly over every detail; time is also a limited resource. Thinking about rational action should itself be done rationally. During these exercises we discussed common situations and our own goals, and tried to examine them from the economic viewpoint.

The first exercise was about our goals. Obviously, many of us came to the Minicamp wanting to learn how to reach them better. What are our goals? Can we imagine them and describe them specifically, or is something missing? Do we feel good about them, or is there an internal conflict? Which goals are we likely to reach in a month, in a year, later?

Some goals we pursue for their own sake. The remaining ones are tools for reaching further goals. In that case it is worth saying what exactly we expect from the given plan... and what *other* ways we have to reach the same thing.

Sometimes we expect multiple consequences from a single plan. But even then we can list the individual expected gains and try to find an alternative for each of them. Sometimes a plan has both wanted and unwanted consequences. That is just one more reason to consider alternative plans for the wanted ones.

(And now, if you were at the Minicamp instead of just reading a web page, this would be the right moment to take pen and paper, think, write a list of your goals for the following few weeks, choose one or two of them, and list the expected outcomes and the alternative ways to get them. And then discuss in the group what you found, and what else could be found. Most of the benefit of this exercise does not come from reading the theory, but from really stopping and thinking about our real choices, and perhaps finding something we did not realize before. The same is true for most of the following exercises.)

The following exercise was led by Critch, and the topic was **emotional awareness**. The relation between rationality and emotions is... well, let's say a bit different from Hollywood movies, where so-called "rational" and "logical" people deny ever having emotions, despite obviously having them. Denying or ignoring things with a huge influence on human life is not part of our definition of rationality. The goal of this exercise was to examine how emotions are experienced, in ourselves and in others.

Different authors provide slightly different lists of *primary* emotions. Here is one list of emotions we share with animals: caring, curiosity, fear, lust, panic, play, rage. Let's add the human emotions known in all cultures: disgust, happiness, sadness, surprise.

Every emotion has its specific traits. It would not make sense to say that some emotion is the "opposite" of another emotion. That would be like saying the foot is the opposite of the hand, or the liver the opposite of the heart (possibly a nice metaphor, but it does not explain much about the liver). How does the emotion feel? Where in the body do we feel it? What sensations (color, sound, temperature, texture...) do we associate with it? What kinds of thoughts?

How is the emotion expressed outwardly? How do we usually act? What do we say? In which specific situations are we likely to feel it? Which of these things can we notice in other people?

For people trying to become more rational, *curiosity* has a special importance. It is an often-ignored emotion; many people would forget to add it to a list of emotions, and some would argue that it does not belong there.

Complex emotions combine the primary emotions. They include, for example: amusement, contempt, contentment, disappointment, embarrassment, envy, excitement, frustration, guilt, irritation, nervousness, optimism, pity, pride, relief, satisfaction, self-pity, shame, sympathy. We can ask the same questions about them, plus a question about their components.

*to be continued...*

viliam@bur.sk