

# Some Thoughts about Probability

**REVISED 02/09/15 WITH AN ADDENDUM AT THE END OF THE POST**

This post is prompted by a reader’s comments about “The Compleat Monty Hall Problem.” I open with a discussion of probability and its inapplicability to single games of chance (e.g., one toss of a coin). With that as background, I then address the reader’s specific comments. I close with a discussion of the debasement of the meaning of probability.

**INTRODUCTORY REMARKS**

What is probability? Is it a property of a thing (e.g., a coin), a property of an event involving a thing (e.g., a toss of the coin), or a description of the average outcome of a large number of such events (e.g., “heads” and “tails” will come up about the same number of times)? I take the third view.

What does it mean to say, for example, that there’s a probability of 0.5 (50 percent) that a tossed coin will come up “heads” (H), and a probability of 0.5 that it will come up “tails” (T)? Does such a statement have any bearing on the outcome of a single toss of a coin? No, it doesn’t. The statement is only a short way of saying that in a sufficiently large number of tosses, approximately half will come up H and half will come up T. The result of each toss, however, is a random event — it has no probability.
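The frequentist reading can be illustrated with a short simulation (a minimal sketch; the function name and the fixed seed are mine, chosen for reproducibility): the relative frequency of heads settles near 0.5 only as the number of tosses grows, while any single toss is simply H or T.

```python
import random

def heads_frequency(n_tosses, seed=0):
    """Toss a simulated fair coin n_tosses times; return the relative
    frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The relative frequency approaches 0.5 only over many tosses; each
# individual toss has a discrete outcome, H or T, not a probability.
for n in (10, 1_000, 100_000):
    print(n, heads_frequency(n))
```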

That is the standard, frequentist interpretation of probability, to which I subscribe. It replaced the classical interpretation, which is problematic:

If a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by

P(A) = N_A/N.

There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is only a ‘finite’ number of possible outcomes. But some important random experiments, such as tossing a coin until it rises heads, give rise to an infinite set of outcomes. And secondly, you need to determine in advance that all the possible outcomes are equally likely without relying on the notion of probability to avoid circularity….

A similar charge has been laid against frequentism:

It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we only can measure a probability with some error of measurement attached, we still get into problems as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular.

Not so:

- There is no “real probability.” If there were, the classical theory would measure it, but the classical theory is circular, as explained above.

- It is therefore meaningless to refer to “error of measurement.” Estimates of probability may well vary from one series of trials to another. But they will “tend to a fixed limit” over many trials (see below).

There are other approaches to probability. (See, for example, this, this, and this.) One approach is known as propensity probability:

Propensities are not relative frequencies, but purported *causes* of the observed stable relative frequencies. Propensities are invoked to *explain why* repeating a certain kind of experiment will generate a given outcome type at a persistent rate. A central aspect of this explanation is the law of large numbers. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will (with high probability) be close to the probability of heads on each single toss. This law suggests that stable long-run frequencies are a manifestation of invariant *single-case* probabilities.

This is circular. You observe the relative frequencies of outcomes and, lo and behold, you have found the “propensity” that yields those relative frequencies.

Another approach is Bayesian probability:

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

Or consider this account:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p….

“Level of certainty” and “subjective interpretation” mean “guess.” The guess may be “educated.” It’s well known, for example, that a balanced coin will come up heads about half the time, in the long run. But to say that “I’m 50-percent confident that the coin will come up heads” is to say nothing meaningful about the outcome of a single coin toss. There are as many probable outcomes of a coin toss as there are bystanders who are willing to make a statement like “I’m x-percent confident that the coin will come up heads.” Which means that a single toss doesn’t have *a* probability, though it can be the subject of many opinions as to the outcome.

Returning to reality, Richard von Mises eloquently explains frequentism in *Probability, Statistics and Truth* (second revised English edition, 1957). Here are some excerpts:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. [P. 11]

* * *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. [P. 11]

* * *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute in a given collective’. [Pp. 11-12]

* * *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective. [P. 15, emphasis in the original]

* * *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? [P. 33, emphasis added]

As stated earlier, it is simply meaningless to say that the probability of H or T coming up in a single toss is 0.5. Here’s the proper way of putting it: There is no reason to expect a single coin toss to have a particular outcome (H or T), given that the coin is balanced, the toss isn’t made in such a way as to favor H or T, and there are no other factors that might push the outcome toward H or T. But to say that P(H) is 0.5 for a single toss is to misrepresent the meaning of probability, and to assert something meaningless about a single toss.

If you believe that probabilities attach to a single event, you must also believe that a single event has an expected value. Let’s say, for example, that you’re invited to toss a coin *once*, for money. You get $1 if H comes up; you pay $1 if T comes up. As a believer in single-event probabilities, you “know” that you have a “50-50 chance” of winning or losing. Would you play a single game, which has an expected value of $0? If you would, it wouldn’t be because of the expected value of the game; it would be because you might win $1, and because losing $1 would mean little to you.

Now, change the bet from $1 to $1,000. The “expected value” of the single game remains the same: $0. But the size of the stake wonderfully concentrates your mind. You suddenly see through the “expected value” of the game. You are struck by the unavoidable fact that what really matters is the prospect of winning $1,000 or losing $1,000, because those are *the only possible outcomes*.

Your decision about playing a single game for $1,000 will depend on your finances (e.g., you may be very wealthy or very desperate for money) and your tolerance for risk (e.g., you may be averse to risk-taking or addicted to it). But — if you are rational — you will not make your decision on the basis of the fictional expected value of a single game, which derives from the fictional single-game probabilities of H and T. You will decide whether you’re willing and able to risk the loss of $1,000.
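The point can be made concrete with a small sketch (the names `STAKE` and `play_once` are illustrative, not from the original): a single play of the $1,000 game never yields its “expected value” of $0; only the average over many plays approaches it.

```python
import random

STAKE = 1_000  # win or lose $1,000 on each toss

def play_once(rng):
    """One coin toss for money: the only possible outcomes are
    +STAKE and -STAKE; $0 is not among them."""
    return STAKE if rng.random() < 0.5 else -STAKE

rng = random.Random(42)  # fixed seed for reproducibility

# A single game never produces the "expected value" of $0 ...
single = play_once(rng)
print("single game:", single)

# ... but the average over many games approaches it.
n = 100_000
average = sum(play_once(rng) for _ in range(n)) / n
print("average over", n, "games:", average)
```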

Do I mean to say that probability is irrelevant to a single play of the Monty Hall problem, or to a choice between games of chance? If you’re a proponent of propensity, you might say that in the Monty Hall game the prize has a propensity to be behind the other unopened door (i.e., the door not chosen by you and not opened by the host). But does that tell you anything about the actual location of the prize in a particular game? No, because the “propensity” merely reflects the outcomes of many games; it says nothing about a single game, which (like Schrödinger’s cat) can have only a single outcome (prize or no prize), not 2/3 of one.

If you’re a proponent of Bayesian probability, you might say that you’re confident with “probability” 2/3 that the prize is behind the other unopened door. But that’s just another way of saying that contestants win 2/3 of the time if they always switch doors. That’s the background knowledge that you bring to your statement of confidence. But someone who’s ignorant of the Monty Hall problem might be confident with 1/2 “probability” that the prize is behind the other unopened door. And he could be right about a particular game, despite his lower level of confidence.

So, yes, I do mean to say that there’s no such thing as a single-case probability. You may have an opinion (or a hunch or a guess) about the outcome of a single game, but it’s only your opinion (hunch, guess). In the end, you have to bet on a discrete outcome. If it gives you comfort to switch to the unopened door because that’s the winning door 2/3 of the time (according to classical probability) and about 2/3 of the time (according to the frequentist interpretation), be my guest. I might do the same thing, for the same reason: to be comfortable about my guess. But I’d be able to separate my psychological need for comfort from the reality of the situation:

A single game is just one event in the long series of events from which probabilities emerge. I can win the Monty Hall game about 2/3 of the time in repeated plays if I always switch doors. But that probability has nothing to do with **a** single game, the outcome of which is a random occurrence.
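To illustrate, here is a minimal simulation of repeated Monty Hall games (the door numbering and function names are my own): the 2/3 figure emerges only as a long-run win rate for the always-switch strategy, never as a property of any one game, each of which is simply won or lost.

```python
import random

def monty_hall(switch, rng):
    """One game: the prize sits behind a random door; the contestant picks
    door 0; the host opens a goat door; the contestant stays or switches.
    Returns True on a win.  (When both remaining doors hide goats, the host
    here opens the lower-numbered one; that choice doesn't affect the rate.)"""
    prize = rng.randrange(3)
    pick = 0
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

rng = random.Random(1)  # fixed seed for reproducibility
n = 100_000
wins = sum(monty_hall(switch=True, rng=rng) for _ in range(n))
print("switching win rate over", n, "games:", wins / n)  # near 2/3 in the long run
```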

**REPLIES TO A READER’S COMMENTS**

I now turn to the reader’s specific comments, which refer to “The Compleat Monty Hall Problem.” (You should read it before continuing with this post if you’re unfamiliar with the Monty Hall problem or my analysis of it.) The reader’s comments — which I’ve rearranged slightly — are in italic type. (Here and there, I’ve elaborated on the reader’s comments; my elaborations are placed in brackets and set in roman type.) My replies are in bold type.

*I find puzzling your statement that a probability cannot “describe” a single instance, eg one round of the Monty Hall problem.*

**See my introductory remarks.**

*While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances. That is the beauty of probability.*

**The long-run result doesn’t “prove” the probability of a particular outcome; it determines the relative frequency of occurrence of that outcome — and nothing more. There is no probability associated with a “smaller number of instances,” certainly not 1 instance. Again, see my introductory remarks.**

*If the *[Monty Hall]* game is played once *[and I don’t switch doors]*, I should budget for one car *[the prize that’s usually cited in discussions of the Monty Hall problem]*, and if it is played 100 times *[and I never switch doors]*, I budget for 33….*

**“Budget” seems to refer to the expected number of cars won, given the number of plays of the game and a strategy of never switching doors. The reader contradicts himself by “budgeting” for 1 car in a single play of the Monty Hall problem. In doing so, he is being unfaithful to his earlier statement: “While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances.” Removing the double negatives, we get “probability may be assigned to a smaller number of instances.” Given that 1 is a smaller number than 100, it follows, by the reader’s logic, that his “budget” for a single game should be 1/3 car (assuming, as he does, a strategy of not switching doors). The reader’s problem here is his insistence that a probability expresses something other than the long-run relative frequency of a particular outcome.**

*To justify your contrary view, you ask how you can win 2/3 of a car *[the long-run average if the contestant plays many games and always switches doors]*; you can win or you can not win, you say, you cannot partly win. Is this not sophistry or a straw man, sloppy reasoning at best, to convince uncritical thinkers who agree that you cannot drive 2/3 of a car?*

**My “contrary view” of what? My view of statistics isn’t “contrary.” Rather, it’s in line with the standard, frequentist interpretation.**

**It’s a simple statement of obvious fact that you can’t win 2/3 of a car. There’s no “sophistry” or “straw man” about it. If you can’t win 2/3 of a car, what does it mean to assign a probability of 2/3 to winning a car by adopting the switching strategy? As discussed above, it means only one thing: A long series of games will be won about 2/3 of the time if all contestants adopt the switching strategy.**

*On what basis other than an understanding of probability would you be optimistic at the prospect of being offered one chance of picking a single Golden Ball worth $1m from a bag of just three balls and pessimistic about your prospects of picking the sole Golden Ball from a barrel of 10,000 balls?*

*The only difference between the two games is that on the one hand you have a decent (33%) chance of winning and on the other hand you have a lousy (0.01%) chance. Isn’t it these disparate probabilities that give you cause for optimism or pessimism, as the case may be?*

**“Optimism” and “pessimism” — like “comfort” — are subjective terms for ill-defined states of mind. There are persons who will be “optimistic” about a given situation, and persons who will be “pessimistic” about the same situation. For example: There are hundreds of millions of persons who are “optimistic” about winning various lotteries, even though they know that the grand prize in each lottery will be assigned to only one of millions of possible numbers. By the same token, there are hundreds of millions of persons who, knowing the same facts, refuse to buy lottery tickets because they are “pessimistic” about the likely outcome of doing so. But “optimism” and “pessimism” — like “comfort” — have nothing to do with probability, which isn’t an attribute of a single game.**

*If probability cannot describe the chances of each of the two one-off “games”, does that mean I could not provide a mathematical basis for my advice that you play the game with 3 balls (because you have a one-in-three chance of winning) rather than the ball in the barrel game which offers a one in ten thousand chance of winning?*

**You can provide a mathematical basis for preferring the game with 3 balls. But you must, in honesty, state that the mathematical basis applies only to many games, and that the outcome of a single game is unpredictable.**

*It might be that probability cannot reliably describe the actual outcome of a single event because the sample size of 1 game is too small to reflect the long-run average that proves the probability. However, comparing the probabilities for winning the two games describes the relative likelihood of winning each game and informs us as to which game will more likely provide the prize.*

*If not by comparing the probability of winning each game, how do we know which of the two games has a better chance of delivering a win? One cannot compare the probability of selecting the Golden Ball from each of the two games unless the probability of each game can be expressed, or described, as you say.*

**Here, the reader comes close to admitting that a probability can’t describe the (expected) outcome of a single event (“reliably” is superfluous). But he goes off course when he says that “comparing the probabilities for the two games … informs us as to which game will more likely provide the prize.” That statement is true only for many plays of the two ball games. It has nothing to do with a single play of either ball game. The choice there must be based on subjective considerations: “optimism,” “pessimism,” “comfort,” a guess, a hunch, etc.**

*Can I not tell a smoker that their lifetime risk of developing lung cancer is 23% even though smokers either get lung cancer or they do not? No one gets 23% cancer. Did someone say they did? No one has 0.2 of a child either but, on average, every family in a census did at one stage have 2.2 children.*

**No, the reader may not (honestly) tell a smoker that his lifetime risk of developing lung cancer is 23 percent, or any specific percentage. The smoker has one life to live; he will either get lung cancer or he will not. What the reader may honestly tell the smoker is that statistics based on the fates of a large number of smokers over many decades indicate that a certain percentage of those smokers contracted lung cancer. The reader should also tell the smoker that the frequency of the incidence of lung cancer in a large population varies according to the number of cigarettes smoked daily. (According to Wikipedia: “For every 3–4 million cigarettes smoked, one lung cancer death occurs.”) Further, the reader should note that the incidence of lung cancer also varies with the duration of smoking at various rates, and with genetic and environmental factors that vary from person to person.**

**As for family size, given that the census counts only post-natal children (who come in integer values), how could “every family in a census … at one stage have 2.2 children”? The average number of children across a large number of families may be 2.2, but surely the reader knows that “every family” did not somehow have 2.2 children “at one stage.” And surely the reader knows that average family size isn’t a probabilistic value, one that measures the relative frequency of an event (e.g., “heads”) given many repetitions of the same trial (e.g., tossing a fair coin), under the same conditions (e.g., no wind blowing). Each event is a random occurrence within the long string of repetitions. The reader may have noticed that family size is in fact strongly determined (especially in Western countries) by non-random events (e.g., deliberate decisions by couples to reproduce, or not). In sum, probabilities may represent averages, but not all (or very many) averages represent probabilities.**

*If not *[by comparing probabilities]*, how do we make a rational recommendation and justify it in terms the board of a think-tank would accept? *[This seems to be a reference to my erstwhile position as an officer of a defense think-tank.]

**Here, the reader extends an inappropriate single-event view of probability to an inappropriate unique-event view. I would not have gone before the board and recommended a course of action — such as bidding on a contract for a new line of work — based on a “probability of success.” That would be an absurd statement to make about an event that is defined by unique circumstances (e.g., the composition of the think-tank’s staff at that time, the particular kind of work to be done, the qualifications of prospective competitors’ staffs). I would simply have spelled out the facts and the uncertainties. And if I had a hunch about the likely success or failure of the venture, I would have recommended for or against it, giving specific reasons for my hunch (e.g., the relative expertise of our staff and competitors’ staffs). But it would have been nothing more than a hunch; it wouldn’t have been my (impossible) assessment of the probability of a unique event.**

**Boards (and executives) don’t base decisions on (non-existent) probabilities; they base decisions on unique sets of facts, and on hunches (preferably hunches rooted in knowledge and experience). Those hunches may sometimes be stated as probabilities, as in “We’ve got a 50-50 chance of winning the contract.” (Though I would never say such a thing.) But such statements are only idiomatic, and have nothing to do with probability as it is properly understood.**

**CLOSING THOUGHTS**

The reader’s comments reflect the popular debasement of the meaning of probability. The word has been adapted to many inappropriate uses: the probability of precipitation (a quasi-subjective concept), the probability of success in a business venture (a concept that requires the repetition of unrepeatable events), the probability that a batter will get a hit in his next at-bat (ditto, given the many unique conditions that attend every at-bat), and on and on. The effect of all such uses (and, often, the purpose of such uses) is to make a guess seem like a “scientific” prediction.

**ADDENDUM (02/09/15)**

The reader whose comments about “The Compleat Monty Hall Problem” I address above has submitted some additional comments.

The first additional comment pertains to this exchange where the reader’s remarks are in italics and my reply is in bold:

*If the *[Monty Hall]* game is played once *[and I don’t switch doors]*, I should budget for one car *[the prize that’s usually cited in discussions of the Monty hall problem]*, and if it is played 100 times *[and I never switch doors]*, I budget for 33….*

**“Budget” seems to refer to the expected number of cars won, given the number of plays of the game and a strategy of never switching doors. The reader contradicts himself by “budgeting” for 1 car in a single play of the Monty Hall problem. In doing so, he is being unfaithful to his earlier statement: “While the long run result serves to prove the probability of a particular outcome, that does not mean that that probability may not be assigned to a smaller number of instances.” Removing the double negatives, we get “probability may be assigned to a smaller number of instances.” Given that 1 is a smaller number than 100, it follows, by the reader’s logic, that his “budget” for a single game should be 1/3 car (assuming, as he does, a strategy of not switching doors). The reader’s problem here is his insistence that a probability expresses something other than the long-run relative frequency of a particular outcome.**

The reader’s rejoinder (with light editing by me):

The game-show producer “budgets” one car if playing just one game because losing one car is a possible outcome and a prudent game-show producer would cover all possibilities, the loss of one car being one of them. This does not contradict anything I have said, it is simply the necessary approach to manage the risk of having a winning contestant and no car. Similarly, the producer budgets 33 cars for a 100-show season [TEA: For consistency, “show” should be “game”].

The contradiction is between the reader’s use of an expected-value calculation for 100 games, but not for a single game. If the game-show producer knows (how?) that contestants will invariably stay with the doors they’ve chosen initially, a reasonable budget for a 100-game season is 33 cars. (But a “reasonable” budget isn’t a foolproof one, as I show below.) By the same token, a reasonable budget for a single game — a game played only once, not one game in a series — is 1/3 car. After all, that is the probabilistic outcome of a single game if you believe that a probability can be assigned to a single game. And the reader does believe that; here’s the first sentence of his original comments:

I find puzzling your statement that a probability cannot “describe” a single instance, eg one round of the Monty Hall problem. [See the section “Replies to a Reader’s Comments” in “Some Thoughts about Probability.”]

Thus, according to the reader’s view of probability, the game-show producer should budget for 1/3 car. After all, in the reader’s view, there’s a 2/3 probability that a contestant won’t win a car in a one-game run of the show.

The reader could respond that cars come in units of one. True, but the designation of a car as the prize is arbitrary (and convenient for the reader). The prize could just as well be money — $30,000 for example. If the contestant wins a car in the (rather contrived) one-game run of the show, the producer then (and only then) gives the contestant a check for $30,000. But, by the reader’s logic, the game has two other equally likely outcomes: the contestant loses and the contestant loses. If those outcomes prevail, the producer doesn’t have to write a check. So, the average prize for the one-game run of the show would be $10,000, or the equivalent of 1/3 car.

Now, the producer might hedge his bets because the outcome of a single game is uncertain; that is, he might budget one car or $30,000. But by the same logic, the producer should budget 100 cars or $3,000,000 for 100 games, not 33 cars or $990,000. Again, the reader contradicts himself. He uses an expected-value calculation for one game but not for 100 games.

What is a “reasonable” budget for 100 games, or fewer than 100 games? Well, it’s really a subjective call that the producer must make, based on his tolerance for risk. The producer who budgets 33 cars or $990,000 for 100 games on the basis of an expected-value calculation may find himself without a job.

Using a random-number generator, I set up a simulation of the outcomes of games 1 through 100, where the contestant always stays with the door originally chosen. Here are the results of the first five simulations that I ran:

Some observations:

Outcomes of individual games are unpredictable, as evidenced by the wild swings and jagged lines, the latter of which persist even at 100 games.

Taking the first game as a proxy for a single-game run of the show, we see that the contestant won that game in just one of the five simulations. To put it another way, in four of the five cases the producer would have thrown away his money on a rapidly depreciating asset (a car) if he had offered a car as the prize.

Results vary widely and wildly after the first game. At 10 and 20 games, contestants are doing better than expected in four of the five simulations. At 30 games, contestants are doing as well or better than expected in all five simulations. At 100 games, contestants are doing better than expected in two simulations, worse than expected in two simulations, and exactly as expected in one simulation. What should the producer do with such information? Well, it’s up to the producer and his tolerance for risk. But a prudent producer wouldn’t budget 33 cars or $990,000 just because that’s the expected value of 100 games.

It can take a lot of games to yield an outcome that comes close to 1/3 car per game. A game-show producer could easily lose his shirt by “betting” on 1/3 car per game for a season much shorter than 100 games, and even for a season of 100 games.
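The kind of simulation described above can be sketched as follows (a simplified stand-in for the author’s random-number-generator setup; the function name and seed are mine). A never-switch contestant wins whenever the prize happens to sit behind the door first chosen, so each 100-game season is a run of 100 one-in-three chances, and season totals scatter around the expected 33 cars.

```python
import random

def season_cars(n_games, rng):
    """Simulate n_games in which the contestant never switches: the prize
    sits behind a random door, so the first-chosen door (door 0) wins
    about 1/3 of the time over many games."""
    return sum(rng.randrange(3) == 0 for _ in range(n_games))

rng = random.Random(7)  # fixed seed so the run is reproducible
seasons = [season_cars(100, rng) for _ in range(5)]
print("cars won in five 100-game seasons:", seasons)
# The totals scatter around the expected 33; a budget of exactly 33 cars
# covers some seasons and falls short in others.
```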

The reader’s second additional comment pertains to this exchange:

*On what basis other than an understanding of probability would you be optimistic at the prospect of being offered one chance of picking a single Golden Ball worth $1m from a bag of just three balls and pessimistic about your prospects of picking the sole Golden Ball from a barrel of 10,000 balls?*

*The only difference between the two games is that on the one hand you have a decent (33%) chance of winning and on the other hand you have a lousy (0.01%) chance. Isn’t it these disparate probabilities that give you cause for optimism or pessimism, as the case may be?*

**“Optimism” and “pessimism” — like “comfort” — are subjective terms for ill-defined states of mind. There are persons who will be “optimistic” about a given situation, and persons who will be “pessimistic” about the same situation. For example: There are hundreds of millions of persons who are “optimistic” about winning various lotteries, even though they know that the grand prize in each lottery will be assigned to only one of millions of possible numbers. By the same token, there are hundreds of millions of persons who, knowing the same facts, refuse to buy lottery tickets because they are “pessimistic” about the likely outcome of doing so. But “optimism” and “pessimism” — like “comfort” — have nothing to do with probability, which isn’t an attribute of a single game.**

The reader now says this:

Optimism or pessimism are states as much as something being hot or cold, or a person perspiring or not, and one would find a lot more confidence among contestants playing a single instance of a 1-in-3 game than one would find amongst contestants playing a 1-in-1b game.

Apart from differing probabilities, how would you explain the higher levels of confidence among players of the 1-in-3 game?

Alternatively, what about a “game” involving 2 electrical wires, one live and one neutral, and a game involving one billion electrical wires, only one of which is live? Contestants are offered $1m if they are prepared to touch one wire. No one accepts the dare in the 1-in-2 game but a reasonable percentage accept the dare in the 1-in-1b game.

Is the probable outcome of a single event a factor in the different rates of uptake between the two games?

The reader begs the question by introducing “hot or cold” and “perspiring or not,” which have nothing to do with objective probabilities (long-run frequencies of occurrence) and everything to do with individual attitudes toward risk-taking. That was the point of my original response, and I stand by it. The reader simply tries to evade the point by reading the minds of his hypothetical contestants (“higher levels of confidence among players of the 1-in-3 game”). He fails to address the basic issue, which is whether or not there are single-case probabilities — an issue that I addressed at length in “Some Thoughts…”.

The alternative hypotheticals involving electrical wires are just variants of the original one. They add nothing to the discussion.

* * *

Enough of this. If the reader — or anyone else — has some good arguments to make in favor of single-case probabilities, drop me a line. If your thoughts have merit, I may write about them.

# The Compleat Monty Hall Problem

*Wherein your humble blogger gets to the bottom of the Monty Hall problem, sorts out the conflicting solutions, and declares that the standard solution is the right solution, but not to the Monty Hall problem as it’s usually posed.*

**THE MONTY HALL PROBLEM AND THE TWO “SOLUTIONS”**

The Monty Hall problem, first posed as a statistical puzzle in 1975, has been notorious since 1990, when Marilyn vos Savant wrote about it in *Parade*. Her solution to the problem, to which I will come, touched off a controversy that has yet to die down. But her solution is now widely accepted as the correct one; I refer to it here as the standard solution.

This is from the *Wikipedia* entry for the Monty Hall problem:

The **Monty Hall problem** is a brain teaser, in the form of a probability puzzle (Gruber, Krauss and others), loosely based on the American television game show *Let’s Make a Deal* and named after its original host, Monty Hall. The problem was originally posed in a letter by Steve Selvin to the *American Statistician* in 1975 (Selvin 1975a), (Selvin 1975b). It became famous as a question from a reader’s letter quoted in Marilyn vos Savant‘s “Ask Marilyn” column in *Parade* magazine in 1990 (vos Savant 1990a):

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Here’s a complete statement of the problem:

1. A contestant sees three doors. Behind one of the doors is a valuable prize, which I’ll denote as **$**. Undesirable or worthless items are behind the other two doors; I’ll denote those items as **x**.

2. The contestant doesn’t know which door conceals **$** and which doors conceal **x**.

3. The contestant chooses a door at random.

4. The host, who knows what’s behind each of the doors, opens one of the doors not chosen by the contestant.

5. The door chosen by the host may not conceal **$**; it must conceal an **x**. That is, the host always opens a door to reveal an **x**.

6. The host then asks the contestant if he wishes to stay with the door he chose initially (“stay”) or switch to the other unopened door (“switch”).

7. The contestant decides whether to stay or switch.

8. The host then opens the door finally chosen by the contestant.

9. If **$** is revealed, the contestant wins; if **x** is revealed the contestant loses.

One solution (the standard solution) is to switch doors because there’s a 2/3 probability that **$** is hidden behind the unopened door that the contestant didn’t choose initially. In vos Savant’s own words:

Yes; you [the contestant] should switch. The first [initially chosen] door has a 1/3 chance of winning, but the second [other unopened] door has a 2/3 chance.

The other solution (the alternative solution) is indifference. Those who propound this solution maintain that there’s an equal chance of finding **$** behind either of the doors that remain unopened after the host has opened a door.

As it turns out, the standard solution doesn’t tell a contestant what to do in a particular game. But the standard solution does point to the right strategy for someone who plays or bets on a large number of games.

The alternative solution accurately captures the unpredictability of any particular game. But indifference is only a break-even strategy for a person who plays or bets on a large number of games.

**EXPLANATION OF THE STANDARD SOLUTION**

The contestant may choose among three doors, and there are three possible ways of arranging the items behind the doors: **$ x x**; **x $ x**; and **x x $**. The result is nine possible ways in which a game may unfold:

Events 1, 5, and 9 each have two branches. But those branches don’t count as separate events. They’re simply subsets of the same event; when the contestant chooses a door that hides **$**, the host must choose between the two doors that hide **x**, but he can’t open both of them. And his choice doesn’t affect the outcome of the event.
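The count of nine events, and the 2/3 share won by switching, can be verified with a short enumeration. This is a sketch of my own (the door indices and labels are illustrative, not taken from the original diagram):

```python
from itertools import product

# The three possible arrangements of $ and x behind the doors,
# crossed with the three possible initial choices: nine events in all.
arrangements = ["$xx", "x$x", "xx$"]
events = list(product(arrangements, [0, 1, 2]))  # (arrangement, initial pick)

# Switching wins exactly when the initial pick does not hide $,
# because the host's forced reveal leaves $ behind the other unopened door.
switch_wins = sum(1 for arr, pick in events if arr[pick] != "$")

print(len(events), switch_wins)  # 9 events; switching wins in 6 of them (2/3)
```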

It’s evident that switching would pay off with a win in 2/3 of the possible events, whereas staying with the original choice would pay off in only 1/3 of the possible events. The fractions 1/3 and 2/3 are usually referred to as probabilities: a 2/3 probability of winning **$** by switching doors, as against a 1/3 probability of winning **$** by staying with the initially chosen door.

Accordingly, proponents of the standard solution — who are now legion — advise the individual (theoretical) contestant to switch. The idea is that switching increases one’s chance (probability) of winning.

**A CLOSER LOOK AT THE STANDARD SOLUTION**

There are three problems with the standard solution:

1. It incorporates a subtle shift in perspective. The Monty Hall problem, as posed, asks what **a** contestant should do. The standard solution, on the other hand, represents the expected (long-run average) outcome of many events, that is, many plays of the game. For reasons I’ll come to, the outcome of a single game can’t be described by a probability.

2. Lists of possibilities, such as those in the diagram above, fail to reflect the randomness inherent in real events.

3. Probabilities emerge from many repetitions of the kinds of events listed above. It is meaningless to ascribe a probability to a single event. In the case of the Monty Hall problem, many repetitions of the game will yield probabilities approximating those given in the standard solution, but the outcome of **each** repetition will be unpredictable. It is therefore meaningless to say that **a** contestant has a 2/3 chance of winning **a** game if he switches. A 2/3 chance of winning refers to the expected outcome of many repetitions, where the contestant chooses to switch every time. To put it baldly: How does a person win 2/3 of a game? He either wins or doesn’t win.

Regarding points 2 and 3, I turn to *Probability, Statistics and Truth* (second revised English edition, 1957), by Richard von Mises:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. (p. 11)

* * *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. (p. 11)

* * *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute’ [e.g., winning **$** rather than **x**] in a given collective [a series of attempts to win **$** rather than **x**]. (pp. 11-12)

* * *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective [emphasis in the original]. (p. 15)

* * *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? It is important to note that this statement remains valid also if the calculated probability has one of the two extreme values 1 or 0 [emphasis added]. (p. 33)

To bring the point home, here are the results of 50 runs of the Monty Hall problem, where each result represents (i) a random initial choice between Door 1, Door 2, and Door 3; (ii) a random array of **$**, **x**, and **x** behind the three doors; (iii) the opening of a door (other than the one initially chosen) to reveal an **x**; and (iv) a decision, in every case, to switch from the initially chosen door to the other unopened door:

What’s relevant here isn’t the fraction of times that **$** appears, which is 3/5 — slightly less than the theoretical value of 2/3. Just look at the utter randomness of the results. The first three outcomes yield the “expected” ratio of two wins to one loss, though in the real game show the two winners and one loser would have been different persons. The same goes for any sequence, even the final — highly “improbable” (i.e., random) — string of nine straight wins (which would have accrued to nine different contestants). And who knows what would have happened in games 51, 52, etc.

If a person wants to win 2/3 of the time, he must find a game show that allows him to continue playing the game until he has reached his goal. As I’ve found in my simulations, it could take as many as 10, 20, 70, or 300 games before the cumulative fraction of wins per game converges on 2/3.

That’s what it means to win 2/3 of the time. It’s not possible to win a single game 2/3 of the time, which is the “logic” of the standard solution as it’s usually presented.
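That long-run character can be seen in a simulation in which a contestant switches in every game. This is a sketch under my own assumptions (the seed and the checkpoints are arbitrary; it is not a record of any actual games):

```python
import random

def play_switch() -> bool:
    """One game in which the contestant always switches after the host's reveal."""
    doors = [0, 1, 2]
    prize = random.choice(doors)   # random arrangement of $ behind a door
    pick = random.choice(doors)    # contestant's random initial choice
    # The host must open a door that is neither the pick nor the prize
    host = random.choice([d for d in doors if d not in (pick, prize)])
    # The switch: move to the remaining unopened door
    final = next(d for d in doors if d not in (pick, host))
    return final == prize

random.seed(42)
wins = 0
for game in range(1, 1001):
    wins += play_switch()
    if game in (10, 100, 1000):
        # The cumulative fraction wanders before settling near 2/3;
        # no single game "wins 2/3 of the time."
        print(game, round(wins / game, 3))
```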

**WHAT ABOUT THE ALTERNATIVE SOLUTION?**

The alternative solution doesn’t offer a winning strategy. In this view of the Monty Hall problem, it doesn’t matter which unopened door a contestant chooses. In effect, the contestant is advised to flip a coin.

As discussed above, the outcome of any particular game is unpredictable, so a coin flip will do just as well as any other way of choosing a door. But randomly selecting an unopened door isn’t a good strategy for repeated plays of the game. Over the long run, random selection means winning about 1/2 of all games, as opposed to 2/3 for the “switch” strategy. (To see that the expected probability of winning through random selection approaches 1/2, return to the earlier diagram; there, you’ll see that **$** occurs in 9/18 = 1/2 of the possible outcomes for “stay” and “switch” combined.)
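The three strategies (stay, switch, and coin flip between the unopened doors) can be compared over many simulated games. This is a sketch of my own, not code from the post; the strategy names and seed are illustrative:

```python
import random

def play(strategy: str) -> bool:
    """One game under a given strategy: 'stay', 'switch', or 'random' (coin flip)."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the prize, revealing an x
    host = random.choice([d for d in doors if d not in (pick, prize)])
    other = next(d for d in doors if d not in (pick, host))
    if strategy == "switch":
        pick = other
    elif strategy == "random":
        pick = random.choice([pick, other])  # indifference: flip a coin
    return pick == prize

random.seed(0)
n = 100_000
results = {s: sum(play(s) for _ in range(n)) / n for s in ("stay", "switch", "random")}
for strategy, freq in results.items():
    print(strategy, round(freq, 2))  # long-run frequencies near 1/3, 2/3, and 1/2
```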

Proponents of the alternative solution overlook the importance of the host’s selection of a door to open. His choice isn’t random. Therein lies the secret of the standard solution — as a long-run strategy.

**WHY THE STANDARD SOLUTION WORKS IN THE LONG RUN**

It’s commonly said by proponents of the standard solution that when the host opens a door, he gives away information that the contestant can use to increase his chance of winning that game. One nonsensical version of this explanation goes like this:

- There’s a 2/3 probability that **$** is behind one of the two doors not chosen initially by the contestant.

- When the host opens a door to reveal **x**, that 2/3 “collapses” onto the other door that wasn’t chosen initially. (Ooh … a “collapsing” probability. How exotic. Just like Schrödinger’s cat.)

Of course, the host’s action gives away nothing in the context of a single game, the outcome of which is unpredictable. The host’s action *does* help in the long run, if you’re in a position to play or bet on a large number of games. Here’s how:

- The contestant’s initial choice (IC) will be wrong 2/3 of the time. That is, in 2/3 of a large number of games, the **$** will be behind one of the other two doors.

- Because of the rules of the game, the host must open one of those other two doors (HC1 and HC2); he can’t open IC.

- When IC hides an **x** (which happens 2/3 of the time), either HC1 or HC2 must conceal the **$**; the one that doesn’t conceal the **$** conceals an **x**.

- The rules require the host to open the door that conceals an **x**.

- Therefore, about 2/3 of the time the **$** will be behind HC1 or HC2, and in those cases it will *always* be behind the door (HC1 or HC2) that the host doesn’t open.

- It follows that the contestant, by consistently switching from IC to the remaining unopened door (HC1 or HC2), will win the **$** about 2/3 of the time.

The host’s action transforms the probability — the long-run frequency — of choosing the winning door from 1/2 to 2/3. But it does so *if and only if* the player or bettor always switches from IC to HC1 or HC2 (whichever one remains unopened).

You can visualize the steps outlined above by looking at the earlier diagram of possible outcomes.

*That’s all there is. There isn’t any more.*

# Something from Nothing?

I do not know if Lawrence Krauss typifies scientists in his logical obtuseness, but he certainly exemplifies the breed of so-called scientists who proclaim atheism as a scientific necessity. According to a review by David Albert of Krauss’s recent book, *A Universe from Nothing*,

the laws of quantum mechanics have in them the makings of a thoroughly scientific and adamantly secular explanation of why there is something rather than nothing.

Albert’s review, which I have quoted extensively elsewhere, comports with Edward Feser’s analysis:

The bulk of the book is devoted to exploring how the energy present in otherwise empty space, together with the laws of physics, might have given rise to the universe as it exists today. This is at first treated as if it were highly relevant to the question of how the universe might have come from nothing—until Krauss acknowledges toward the end of the book that energy, space, and the laws of physics don’t really count as “nothing” after all. Then it is proposed that the laws of physics alone might do the trick—though these too, as he implicitly allows, don’t really count as “nothing” either.

Bill Vallicella puts it this way:

[N]o one can have any objection to a replacement of the old Leibniz question — Why is there something rather than nothing? … — with a physically tractable question, a question of interest to cosmologists and one amenable to a physics solution. Unfortunately, in the paragraph above, Krauss provides two *different* replacement questions while stating, absurdly, that the second is a more succinct version of the first:

K1. How can a physical universe arise from an initial condition in which there are no particles, no space and perhaps no time?

K2. Why is there ‘stuff’ instead of empty space?

These are obviously distinct questions. To answer the first one would have to provide an account of how the universe originated from nothing physical: no particles, no space, and “perhaps” no time. The second question would be easier to answer because it presupposes the existence of space and does not demand that empty space be itself explained.

Clearly, the questions are distinct. But Krauss conflates them. Indeed, he waffles between them, reverting to something like the first question after raising the second. To ask why there is something physical as opposed to nothing physical is quite different from asking why there is physical “stuff” as opposed to empty space.

Several years ago, I explained the futility of attempting to decide the fundamental question of creation and its cause on scientific grounds:

Consider these three categories of knowledge (which long pre-date their use by Secretary of Defense Donald Rumsfeld): known knowns, known unknowns, and unknown unknowns. Here’s how that trichotomy might be applied to a specific aspect of scientific knowledge, namely, Earth’s rotation about the Sun:

1. Known knowns — Earth rotates about the Sun, in accordance with Einstein’s theory of general relativity.

2. Known unknowns — Earth, Sun, and the space between them comprise myriad quantum phenomena (e.g., matter and its interactions in, on, and above the Earth and Sun; the transmission of light from Sun to Earth). We don’t know whether and how quantum phenomena influence Earth’s rotation about the Sun; that is, whether Einsteinian gravity is a partial explanation of a more complete theory of gravity that has been dubbed quantum gravity.

3. Unknown unknowns — Other things might influence Earth’s rotation about the Sun, but we don’t know what those other things are, if there are any.

For the sake of argument, suppose that scientists were as certain about the origin of the universe in the Big Bang as they are about the fact of Earth’s rotation about the Sun. Then, I would write:

1. Known knowns — The universe was created in the Big Bang, and the universe — in the large — has since been “unfolding” in accordance with Einsteinian relativity.

2. Known unknowns — The Big Bang can be thought of as a meta-quantum event, but we don’t know if that event was a manifestation of quantum gravity. (Nor do we know how quantum gravity might be implicated in the subsequent unfolding of the universe.)

3. Unknown unknowns — Other things might have caused the Big Bang, but we don’t know if there were such things or what those other things were — or are.

Thus — to a scientist qua scientist — God and Creation are unknown unknowns because, as unfalsifiable hypotheses, they lie outside the scope of scientific inquiry. Any scientist who pronounces, one way or the other, on the existence of God and the reality of Creation has — for the moment, at least — ceased to be a scientist.

Which is not to say that the question of creation is immune to logical analysis; thus:

To say that the world as we know it is the product of chance — and that it may exist only because it is one of vastly many different (but unobservable) worlds resulting from chance — is merely to state a theoretical possibility. Further, it is a possibility that is beyond empirical proof or disproof; it is on a par with science fiction, not with science.

If the world as we know it — our universe — is not the product of chance, what is it? A reasonable answer is found in another post of mine, “Existence and Creation.” Here is the succinct version:

- In the material universe, cause precedes effect.
- Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
- The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

Another blogger once said this about the final sentence of that quotation, which I lifted from another post of mine:

I would have to disagree with the last sentence. The problem is epistemology — how do we know what we know? Atheists, especially ‘scientistic’ atheists, take the position that the modern scientific methodology of observation, measurement, and extrapolation from observation and measurement, is sufficient to detect anything that Really Exists — and that the burden of proof is on those who propose that something Really Exists that cannot be reliably observed and measured; which is of course impossible within that mental framework. They have plenty of logic and science on their side, and their ‘evidence’ is the commonly-accepted maxim that it is impossible to prove a negative.

I agree that the problem of drawing conclusions about creation from science (as opposed to logic) is epistemological. The truth and nature of creation is an “unknown unknown” or, more accurately, an “unknowable unknown.” With regard to such questions, scientists do not have logic and science on their side when they assert that the existence of the universe is possible without a creator, as a matter of science (as Krauss does, for example). Moreover, it is scientists who are trying to prove a negative: that there is neither a creator nor the logical necessity of one.

“Something from nothing” is possible, but only if there is a creator who is not part of the “something” that is the proper subject of scientific exploration and explanation.

Related posts:

Atheism, Religion, and Science

The Limits of Science

Three Perspectives on Life: A Parable

Beware of Irrational Atheism

The Creation Model

The Thing about Science

Evolution and Religion

Words of Caution for Scientific Dogmatists

Science, Evolution, Religion, and Liberty

The Legality of Teaching Intelligent Design

Science, Logic, and God

Capitalism, Liberty, and Christianity

Is “Nothing” Possible?

Debunking “Scientific Objectivity”

Science’s Anti-Scientific Bent

Science, Axioms, and Economics

The Big Bang and Atheism

The Universe . . . Four Possibilities

Einstein, Science, and God

Atheism, Religion, and Science Redux

Pascal’s Wager, Morality, and the State

Evolution as God?

The Greatest Mystery

What Is Truth?

The Improbability of Us

A Digression about Probability and Existence

More about Probability and Existence

Existence and Creation

Probability, Existence, and Creation

The Atheism of the Gaps

# Probability, Existence, and Creation

A point that I make in “More about Probability and Existence” is made more eloquently and succinctly by Jacques Maritain:

To attempt to demonstrate that the world can be the effect of chance by beginning with the presupposition of this very possibility is to become the victim of a patent sophism or a gross illusion. In order to have the right to apply the calculus of probabilities to the case of the formation of the world, it would be necessary first to have established that the world can be the effect of chance. (*Approaches to God*, Macmillan paperback edition, pp. 60-61.)

To say that the world as we know it is the product of chance — and that it may exist only because it is one of vastly many different (but unobservable) worlds resulting from chance — is merely to state a theoretical possibility. Further, it is a possibility that is beyond empirical proof or disproof; it is on a par with science fiction, not with science.

If the world as we know it — our universe — is not the product of chance, what is it? A reasonable answer is found in another post of mine, “Existence and Creation.” Here is the succinct version:

- In the material universe, cause precedes effect.
- Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
- The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

**UPDATE 01/11/16:**

Philosopher Gary Gutting writes:

The idea of a cosmological argument is to move from certain known effects to God as their cause. To construct such an argument, we need a *principle of causality*: a statement of which sorts of things need causes to explain them. The simplest such principle would be: *everything has a cause*. But this is too strong a claim, since if everything has a cause, then God will have a cause and so be dependent on something else, which would, therefore, have a better claim to be God. A cosmological argument will work only if we have a causal principle that will not apply to God….

A cosmological argument is an effort to carry the search for an explanation as far as it can go, to see if we can discover not just an explanation of some single thing but an explanation of everything—for, we might say, the world (*kosmos* in Greek) as a whole. Let’s call this an *ultimate explanation*. We want, therefore, an argument that will show that God is the ultimate explanation. Perhaps, then, the causal principle we need is that *there must be an ultimate explanation (provided by an ultimate cause)*.

Now, however, we need to think more carefully about what an ultimate explanation would explain. We’ve said it’s an explanation of everything, but just what does this mean? Something that needs explanation is, by definition, not self-explanatory. It needs to be explained by something other than itself. As we’ve seen, if we sought an explanation of literally *everything*, then there would be nothing available to provide the explanation. If there is to be an ultimate explanation, then, it must be something that itself requires no explanation but explains everything else. The world that the cosmological argument is trying to explain must not be *everything* but *everything that needs an explanation*.

But what things require explanation? One plausible answer is that we must explain those things that do exist but *might not exist*, things that, to use the traditional technical term, are *contingent*…. Correspondingly, for the cosmological argument to work, the explanation of everything contingent must be something that is *not contingent*; namely, something that not only exists but also cannot *not* exist; it must, that is, be *necessary*. If it weren’t necessary, it would be contingent and so itself in need of explanation. (Notice that what is necessary is not contingent, and vice versa.) Simply put, the God the cosmological argument wants to prove exists has to be a necessary, not a contingent, being.

Here, then, we move to a still better principle of causality: that *every contingent thing requires a cause*. But we still need to be careful. Most contingent things can be explained by other contingent things. The world (the totality of contingent things) is a complex explanatory system…. If this makes sense, the cosmological argument can’t get off the ground because, as we’ve seen, its God is a necessary being that’s needed to explain what contingent things can’t….

What does this mean for our effort to construct a cosmological argument? It means that our argument must deny that there is an *infinite regress* of contingent things that explains everything that needs explaining. Otherwise, there’s no need for a necessary God.

This is a crucial stage in our search for a cosmological argument. We have a plausible principle of causality: any contingent being needs a cause. We now see that we need another premise: that *an infinite regress of contingent things cannot explain everything that needs explaining*…. We can agree that there might be an infinite series of contingent explainers but still maintain that *such an infinite series itself needs an explanation*. We might, in effect, grant that there could be an infinite series of tortoises, each supporting the other—and the whole chain supporting the Earth—but still insist that there must be some explanation for why all those tortoises exist. That is, our argument will require that an infinite regress of contingent things must *itself* have an explanation. This gives us the two key premises of our cosmological argument: a principle of causality and a principle for excluding an infinite regress.

Now we can formulate our argument:

- There are contingent beings.
- The existence of any contingent being has an explanation.
- Such an explanation must be provided by either a necessary being or by an infinite regress of contingent beings.
- An explanation by means of an infinite regress of contingent beings is itself in need of an explanation by a necessary being.
- Therefore, there is a necessary being that explains the existence of contingent beings.

This argument is logically valid; that is, if the premises are true, then the conclusion is true…. [“Can We Prove That God Exists? Richard Dawkins and the Limits of Faith and Atheism,” *Salon*, November 29, 2015]

That looks like my argument.

Related posts:

Atheism, Religion, and Science

The Limits of Science

Three Perspectives on Life: A Parable

Beware of Irrational Atheism

The Creation Model

The Thing about Science

Evolution and Religion

Words of Caution for Scientific Dogmatists

Science, Evolution, Religion, and Liberty

The Legality of Teaching Intelligent Design

Science, Logic, and God

Capitalism, Liberty, and Christianity

Is “Nothing” Possible?

Debunking “Scientific Objectivity”

Science’s Anti-Scientific Bent

Science, Axioms, and Economics

The Big Bang and Atheism

The Universe . . . Four Possibilities

Einstein, Science, and God

Atheism, Religion, and Science Redux

Pascal’s Wager, Morality, and the State

Evolution as God?

The Greatest Mystery

What Is Truth?

The Improbability of Us

A Digression about Probability and Existence

More about Probability and Existence

Existence and Creation

# More about Probability and Existence

In “A Digression about Probability and Existence” I address

the view that there is life as we know it — an outcome with a low, prior probability given the (theoretical) multitude of possible configurations of the universe — only because there are vastly many actual or possible universes with vastly many configurations.

I observe that

[i]n this view, life as we know it is an improbable phenomenon that we are able to witness only because we happen to exist in one of the multitude of possible or actual universes.

I should have pointed out that it is impossible to know whether life as we know it is a low-probability event. Such a conclusion rests on an unsupportable assumption: that the existence of a universe which is “fine tuned” to enable life is a low-probability event. And yet, that assumption is the basis for assertions that the existence of our universe — with its life-supporting combination of matter, energy, and physical laws — “proves” that there must be other universes because ours is so unlikely. Such “logic” is an edifice of rank circularity constructed on a foundation of pure supposition.

Such “logic,” moreover, misapplies the concept “probability.” No object or event has a probability (knowable chance of happening) unless it meets the following conditions:

1. The object or event is a member of a collective of observable phenomena, where every member of the collective has common features.

2. The collective is a mass phenomenon or an unlimited sequence of observations, where (a) the relative frequencies of particular attributes within the collective **tend** to fixed limits and (b) these fixed limits remain the same for reasonably large subsets of the collective. (Adapted from “Summary of the Definition,” on pp. 28-29 in Chapter 1, “The Definition of Probability,” of Richard von Mises’s *Probability, Statistics and Truth*, 1957 Dover edition.)

Mises, obviously, was a “frequentist,” and his view of probability is known as “frequentism.” Despite the criticisms of frequentism (follow the preceding link), it offers the only rigorous view of probability. Nor does it insist (as suggested at the link) that a probability is a precisely knowable or fixed value. But it is a quantifiable value, based on **observations of actual objects or events**.
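Von Mises’s definition of probability as the limit of a relative frequency in an ever-longer sequence of observations can be illustrated with coin tosses. This is my own sketch; the checkpoints and seed are arbitrary:

```python
import random

random.seed(7)
heads = tosses = 0
for checkpoint in (100, 10_000, 1_000_000):
    while tosses < checkpoint:
        heads += random.random() < 0.5  # one toss: a random event, not a probability
        tosses += 1
    # The relative frequency of heads tends toward a fixed limit (1/2)
    # only as the sequence of observations grows long.
    print(tosses, round(heads / tosses, 4))
```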

Other approaches to probability are vague and subjective. There are, for example, degrees of belief (probabilistic logic), statements of propensity (probabilistic propensity), and “priors” (Bayesian probability). Unlike frequentism, these appeal to speculation, impressions, and preconceptions. Reliance on such notions of probability as evidence of the actual likelihood of an event is the quintessence of circularity.

In summary, there is no sound basis in logic or empirical science for the assertion that the universe we know is a highly improbable one and, therefore, must be one of vastly many universes — if it were not the conscious creation of an exogenous force or being (i.e., God). The universe we know simply “is” — and that is all we know, or probably can know, as a matter of science.

Related posts:

Atheism, Religion, and Science

The Limits of Science

Three Perspectives on Life: A Parable

Beware of Irrational Atheism

The Creation Model

The Thing about Science

Evolution and Religion

Words of Caution for Scientific Dogmatists

Science, Evolution, Religion, and Liberty

The Legality of Teaching Intelligent Design

Science, Logic, and God

Capitalism, Liberty, and Christianity

Is “Nothing” Possible?

A Dissonant Vision

Debunking “Scientific Objectivity”

Science’s Anti-Scientific Bent

Science, Axioms, and Economics

The Big Bang and Atheism

The Universe . . . Four Possibilities

Einstein, Science, and God

Atheism, Religion, and Science Redux

Pascal’s Wager, Morality, and the State

Evolution as God?

The Greatest Mystery

What Is Truth?

The Improbability of Us

A Digression about Probability and Existence

# A Digression about Probability and Existence

The probability of an event can mean either the prior likelihood that the event will (or could) happen, or the relative frequency with which the event is observed in nature or experiment.

It is known, for example, that the following prior probabilities attach to the outcome of a single roll of a pair of fair dice:

| Sum | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Probability | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |

And if a single person rolled a pair of dice 1,000,000 times or 1,000,000 persons each rolled a pair of dice once, the result would be close (but not necessarily identical) to this:

| Sum | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Rolls | 27,778 | 55,556 | 83,333 | 111,111 | 138,889 | 166,667 | 138,889 | 111,111 | 83,333 | 55,556 | 27,778 |
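The claim is easy to check empirically. The following Python sketch (my illustration, not the post’s) rolls a pair of simulated fair dice 1,000,000 times and compares the observed relative frequencies with the theoretical probabilities:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the illustration is reproducible

# Roll a pair of fair dice 1,000,000 times and tally the sums.
rolls = 1_000_000
counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(rolls))

# Theoretical probability of each sum s: (6 - |s - 7|) / 36
theoretical = {s: (6 - abs(s - 7)) / 36 for s in range(2, 13)}

# Observed relative frequencies come out close to -- but not exactly
# equal to -- the theoretical values, as the post says.
for s in range(2, 13):
    print(s, counts[s] / rolls, round(theoretical[s], 4))
```

The observed frequencies approximate the fixed limits only over the whole collective of rolls; they say nothing about any single roll, which is the point of the paragraph that follows.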

But the fact that a single player, on a single roll, throws a 2, 5, 9, 12, or any other sum tells us nothing about the number of rolls the player has made or will make, nor does it tell us anything about the number of players who may have rolled dice at the same moment. All it tells us is that the player has attained a particular result on that particular roll of the dice.

This is at odds with the view that there is life as we know it — an outcome with a low prior probability, given the (theoretical) multitude of possible configurations of the universe — only because there are vastly many actual or possible universes with vastly many configurations. In this view, life as we know it is an improbable phenomenon that we are able to witness only because we happen to exist in one of the multitude of possible or actual universes.

So, is our universe and the life that has arisen in it a consequence of design, or is it all a matter of “luck”? And if it is due to “luck,” what created the material of which the universe is made and what determined the laws that are evident in the actions of that material?
