
 Joyce Lam Nga Ching

 2001714828

 Phil1007

12-4-2002

27-4-2002

 

  



PROBABILITY THEORY: THE LOGIC OF SCIENCE 

1. Brief History of Probability

       Before the theory of probability was formalized, gambling was already popular. Gamblers were crafty enough to work out simple laws of probability by witnessing events at first hand, and the opportunities for exploiting the often complex and sometimes seemingly contradictory laws of probability were limitless.

       Concepts of probability have been around for thousands of years, but probability theory did not arise as a branch of mathematics until the mid-seventeenth century. During the fifteenth century several works on probability emerged, and calculations of probabilities became more noticeable during this period, even though mathematicians in Italy and France remained largely unfamiliar with these methods of calculation.

        Probability theory proper was developed in the 16th and 17th centuries, largely in response to gambling questions, though questions of this kind had conceivably been of interest to some people for many hundreds of years.

              In the seventeenth century Galileo wrote down some ideas about dice games. This led to discussions and papers which formed the earliest parts of probability theory. The dice problem asks how many times one must throw a pair of dice before one expects a double six, while the problem of points asks how to divide the stakes if a game of dice is left incomplete. Mathematicians of the period solved the problem of points for a two-player game, but did not develop mathematical methods powerful enough to solve it for three or more players.

                In the mid-seventeenth century, a simple question directed to Blaise Pascal by a nobleman sparked the birth of probability theory as we know it today.  The Chevalier de Méré gambled frequently to increase his wealth.  He bet, on rolls of a die, that at least one 6 would appear during a total of four rolls.  From past experience, he knew that he was more successful than not with this game of chance.  Tired of this approach, he decided to change the game.  He bet that he would get a total of 12, or a double 6, on twenty-four rolls of two dice. Soon he realized that his old approach to the game resulted in more money.  He asked his friend Blaise Pascal why his new approach was not as profitable. Pascal found that the probability of winning using the new approach was only 49.1 percent, compared to 51.8 percent using the old approach.

                This problem, and others posed by de Méré, is said to be the start of the famous correspondence between Pascal and Pierre de Fermat.  They continued to exchange their thoughts on mathematical principles and problems through a series of letters. Historians think that the first letters written concerned the above problem and other problems dealing with probability theory.  To solve the problems, Fermat used combinatorial analysis (determining the number of possible outcomes in ideal games of chance by computing permutation and combination numbers) and Pascal used reasoning by recursion (an iterative process that determines the result of the next case from the present case). Thus the fundamental principles of probability theory were formulated for the first time.

            Although a few special problems on games of chance had been solved by some Italian mathematicians in the 15th and 16th centuries, no general theory was developed before this famous correspondence.  Therefore, Pascal and Fermat are the mathematicians credited with the founding of probability theory. There was a great development in the understanding of probability during the 17th century.

"The excitement that a gambler feels when making a bet is equal to the amount he might win times the probability of winning it." - Pascal

         Even in the 21st century we still use gaming metaphors to try to explain elementary probability theory which is why so many of the examples here refer to dice, playing cards and coins.

2. Detailed Solutions of Chevalier de Méré’s Games of Chance

          Chevalier de Méré’s first game involved taking a chance that at least one 6 would appear during a total of four rolls of one die.

Solution:  To find the probability of getting at least one 6 in four rolls directly, one would have to calculate the probabilities of getting exactly one 6, two 6’s, three 6’s, and four 6’s.  It is quicker to find the probability of the opposite event, called the complement: the probability of not getting any 6’s.  Since the probability of not rolling a 6 is five out of six on each of the four rolls, this probability is (5/6)^4.  To solve the game, subtract this probability from 1:

1 - (5/6)^4 ≈ 51.8 percent

        Chevalier de Méré’s second game involved taking a chance that he would roll a total of 12, or a double 6, once in 24 rolls of two dice.

Solution:  Instead of finding the probability of getting a total of 12, or a double 6, in 24 rolls of two dice, Pascal found the probability of de Méré losing, that is, of not rolling a double 6.  Since there are 36 possible rolls of two dice and only one way to roll a double 6, the probability of not rolling a double 6 on one throw is 35/36.  He rolled 24 times, so the probability that he loses the game is:

(35/36)^24 ≈ 50.9 percent

  To compare with the first game, one must calculate the probability that he wins this game.  This is:

1 - (35/36)^24 ≈ 49.1 percent
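Both of de Méré’s games can be checked with a few lines of Python; this is a sketch using exact fractions, with the variable names chosen here for illustration:

```python
from fractions import Fraction

# Game 1: at least one 6 in four rolls of one die.
# Complement: no 6 in any roll, probability (5/6)^4.
p_win1 = 1 - Fraction(5, 6) ** 4
print(float(p_win1))          # ≈ 0.518, i.e. 51.8 percent

# Game 2: at least one double 6 in 24 rolls of two dice.
# Complement: no double 6 in any roll, probability (35/36)^24.
p_win2 = 1 - Fraction(35, 36) ** 24
print(float(p_win2))          # ≈ 0.491, i.e. 49.1 percent
```

Using Fraction keeps the arithmetic exact until the final conversion to a decimal, which reproduces the 51.8 and 49.1 percent figures above.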

3. Terminology

        We can analyze an experiment to determine exactly what outcomes are possible, and to obtain exact values for the probabilities. 

        The sample space is the set (collection) of all possible outcomes of the experiment.  An event is some subset of the sample space, that is, an event is a set of outcomes.  The probability of the event is given by:

P(event) = (number of outcomes in the event) / (number of all possible outcomes)

Experiment

        An experiment is a repeatable process that has more than one possible outcome. Each of the possible outcomes is called a simple event. The actual outcome of any one trial of the experiment is determined by chance.

Experiment: Toss a coin

Two simple events: Head or Tail.

Experiment: Roll a die

Six simple events: 1, 2, 3, 4, 5, or 6.

Experiment: Choose a card from a standard deck

Fifty-two simple events: 2 through 10, Jack, Queen, King, or Ace, in each of hearts, diamonds, spades, and clubs.

Notice that in all these experiments, you cannot tell exactly what will happen each time you flip a coin, toss a die, or draw a card from a deck. However, you can list all the possibilities for each type of experiment.

Sample space:       

        A sample space is a set, labeled S or SS, whose elements describe all possible outcomes (simple events) of an experiment.

        When we are trying to find a sample space, all we need do is envision all the things that can happen when we perform some experiment. With things like tossing a coin, the outcomes are easy to figure out: flip a coin and there are only two possible outcomes, a head or a tail. Toss a die and the possible outcomes are any of the faces on the die: a 1, or a 2, or a 3, etc. As our experiments get more complicated, we will find that the counting arguments discovered in the previous section come in handy when we want to find all the possible outcomes of an experiment.

Experiment: Toss a coin

SS={H,T}

Experiment: Roll a die

SS={1,2,3,4,5,6}

Experiment: Choose a card

SS={2-Ace hearts, 2-Ace diamonds, 2-Ace spades, 2-Ace clubs}
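The larger sample spaces can also be built programmatically. As a minimal sketch (the rank and suit names here are illustrative), the 52-element sample space for the card experiment is just every rank paired with every suit:

```python
from itertools import product

# Sample space for "choose a card": every rank paired with every suit.
ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10',
         'Jack', 'Queen', 'King', 'Ace']
suits = ['hearts', 'diamonds', 'spades', 'clubs']
deck = [(rank, suit) for rank, suit in product(ranks, suits)]
print(len(deck))   # 52 simple events
```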

Event space:

An event space is a set whose elements describe a specific collection of simple events.

When we refer to the probability of something, we are referring to the chance of a specific event happening. For instance, the probability of getting a head when we flip a coin is the chance of the specific simple event (a head) appearing. When we toss a die, the probability of getting a six is the probability of the specific simple event of rolling a six. The probability of drawing a King from a standard deck of cards is the chance of pulling a King (a specific event) out of the deck. For example:

Event: Toss a head

Event: Roll a two on a die

Event: Draw a red card

Probability:

A probability is a rule of correspondence that assigns to each event, A, in the sample space a number, called P(A), such that:

1) For any event A, P(A) is between 0 and 1.
2) The sum of the probabilities of all distinct simple events is 1.

        We can see from the form of a probability that we are going to get a fraction or decimal. Remember that any decimal can be represented as a percent by simply multiplying the decimal by 100. Therefore, probabilities can be presented as a fraction, a decimal, or a percent.
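The defining formula, P(event) = (outcomes in the event) / (all possible outcomes), can be computed directly once the sample space and event are written down as sets. A minimal sketch, using the event "roll an even number" on one die (an example chosen here for illustration):

```python
# P(event) = (number of outcomes in the event) / (number of all possible outcomes)
sample_space = {1, 2, 3, 4, 5, 6}   # SS for rolling one die
event = {2, 4, 6}                   # event: roll an even number
p = len(event) / len(sample_space)
print(p)                  # 0.5 as a decimal
print(p * 100, 'percent') # 50.0 percent
```

The same value can equally be written as the fraction 3/6 = 1/2, the decimal 0.5, or the percentage 50%.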

Combining Probabilities

Of course, since probabilities take the form of fractions, it seems reasonable to be able to add and multiply them. We have to be careful when we do this, however. In order to combine probabilities we have to consider whether the events in question are mutually exclusive or independent.

Independent or related events?

One of the important steps when considering the probability of two or more events occurring is to decide whether they are independent or related events.

A. Independent events

        The probability of throwing a double three with two dice is the result of throwing a three with the first die and a three with the second die. There is one favourable outcome out of six for the first event and one out of six for the second, so the probability is (1/6) * (1/6) = 1/36, or about 2.77%.

        The two events are independent, since whatever happens to the first die cannot affect the throw of the second; the probabilities are therefore multiplied, giving 1/36.

B. Related events
     1. What happens if we want to throw a 1 and a 6 in any order?  This means that we do not mind whether the first die shows a 1 or a 6, as either keeps us in with a chance.  But on the first die, if a 1 falls uppermost, it clearly rules out the possibility of a 6 being uppermost, so the two outcomes, 1 and 6, are mutually exclusive; one result directly excludes the other. In this case, the probability of throwing a 1 or a 6 with the first die is the sum of the two probabilities: 1/6 + 1/6 = 1/3.
     2. The probability of the second die being favourable is still 1/6, as the second die can only be one specific number: a 6 if the first die showed 1, and vice versa.
     3. Therefore the probability of throwing a 1 and a 6 in any order with two dice is 1/3 * 1/6 = 1/18.
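The result 1/3 * 1/6 = 1/18 can be double-checked by brute force, listing all 36 equally likely outcomes of two dice and counting the favourable ones. A small sketch:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

# Favourable: a 1 and a 6 in either order, i.e. (1, 6) or (6, 1).
favourable = [o for o in outcomes if sorted(o) == [1, 6]]

print(len(favourable), '/', len(outcomes))   # 2 / 36, which is 1/18
```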

Converse probabilities

Often you do not need the probability that an event will occur, but its opposite: the probability that the event will not occur. For example, the probability of throwing a 1 on a die is 1/6, so the probability of a 'non-1' is 1 - 1/6, which equals 5/6.
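In code, the converse probability is just a subtraction from 1; a minimal sketch with exact fractions:

```python
from fractions import Fraction

p_one = Fraction(1, 6)     # probability of throwing a 1
p_not_one = 1 - p_one      # converse: probability of a 'non-1'
print(p_not_one)           # 5/6
```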

The law of large numbers / "The law of averages"

        The theory of probability becomes of enhanced value to gamblers when it is used with the law of large numbers.  The law of large numbers states that:

    “If the probability of a given outcome of an event is P and the event is repeated N times, then the larger N becomes, the more likely it is that the number of occurrences of the given outcome will be close, in proportion, to N*P.”

For example:

        If the probability of throwing a double-6 with two dice is 1/36, then the more times we throw the dice, the closer, in proportion, will be the number of double-6s thrown to 1/36 of the total number of throws. This is, of course, what in everyday language is known as the law of averages.  Overlooking the vital words 'in proportion' in the above definition leads to much misunderstanding among gamblers.  The 'gambler's fallacy' lies in the idea that 'in the long run' chances will even out. Thus if a coin has been spun 100 times, and has landed 60 times head uppermost and 40 times tails, many gamblers will state that tails are now due for a run to get even.  There are fancy names for this belief.  The theory is called the maturity of chances, and the expected run of tails is known as a 'corrective', which will bring the total of tails eventually equal to the total of heads.  The belief is that the 'law' of averages really is a law which states that in the longest of long runs the totals of both heads and tails will eventually become equal.

        In fact, the opposite is really the case.  As the number of tosses gets larger, the percentage of heads or tails thrown tends to get nearer to 50%, but the difference between the actual number of heads or tails thrown and the number representing exactly 50% tends to get larger.
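This behaviour is easy to see in a simulation. The sketch below tosses a simulated fair coin and prints, at increasing N, both the proportion of heads and the absolute gap between the head count and N/2; the seed and checkpoints are arbitrary choices, and any single run only typically (not always) shows the gap growing, roughly like the square root of N:

```python
import random

random.seed(1)  # arbitrary seed so the run is repeatable

heads = 0
for n in range(1, 1_000_001):
    heads += random.random() < 0.5          # one fair coin toss
    if n in (100, 10_000, 1_000_000):
        # proportion of heads, and absolute gap from exactly 50%
        print(n, round(heads / n, 4), abs(heads - n / 2))
```

The proportion column drifts toward 0.5 while the gap column tends to be far larger at N = 1,000,000 than at N = 100, which is exactly the point the paragraph above makes.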

        An understanding of the law of large numbers leads to the realisation that what appear to be fantastic improbabilities are not remarkable at all, but merely to be expected.



 
