Probstat/notes/basic

This is part of probstat. These notes are meant to be used as a complement to the video lectures. They contain only a summary of the material discussed in the videos. Please don't use them as a substitute for watching the clips.

Random experiments

When we want to talk about probability, we start with a random experiment. After we perform this experiment, we get an outcome. The set of all possible outcomes is called the sample space, usually denoted by S.

We are generally interested in outcomes with certain properties; such a set of outcomes is usually referred to as an event. Formally, an event is a subset of the sample space S.
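For example (a small illustration, not from the original notes), rolling a six-sided die is a random experiment: the sample space is S = {1, 2, 3, 4, 5, 6}, and "the outcome is even" is the event {2, 4, 6}. A minimal Python sketch of this setup:

 # A minimal sketch (assumed example): a die roll as a random experiment.
 import random

 S = {1, 2, 3, 4, 5, 6}      # sample space: all possible outcomes
 E = {2, 4, 6}               # event "the outcome is even", a subset of S

 outcome = random.choice(sorted(S))   # perform the experiment once
 print(outcome, outcome in E)         # the outcome, and whether the event occurred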

Probability axioms

We would like to assign probabilities to events. Let S denote the sample space. Formally, a function P is a probability function if it satisfies the following 3 axioms.

Axiom 1: For any event E, $0 \leq P(E) \leq 1$.

Axiom 2: $P(S) = 1$.

Axiom 3: For any countable sequence of mutually exclusive events $E_1, E_2, \ldots$, we have that

$P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)$.
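As a quick check (a sketch, assuming a fair six-sided die where every outcome is equally likely), the uniform probability function below satisfies the three axioms; the assertions verify them for a couple of concrete events.

 # Sketch: the uniform probability function on a fair die satisfies the axioms.
 from fractions import Fraction

 S = {1, 2, 3, 4, 5, 6}

 def P(event):
     # uniform model: probability = |event| / |S|
     return Fraction(len(event & S), len(S))

 E1, E2 = {1, 2}, {5}                  # two mutually exclusive events
 assert 0 <= P(E1) <= 1                # Axiom 1
 assert P(S) == 1                      # Axiom 2
 assert P(E1 | E2) == P(E1) + P(E2)    # Axiom 3 (finite case)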

Useful properties

There are various properties that can be proved from the axioms. We list a few here.

1: $P(\emptyset) = 0$.

2: $P(A^c) = 1 - P(A)$.

3: For events A and B, if $A \subseteq B$, then $P(A) \leq P(B)$.

4: For events A and B, $P(A) = P(AB) + P(AB^c)$.

5: For events A and B, $P(A \cup B) = P(A) + P(B) - P(AB)$.

5a: For events A and B, $P(A \cup B) \leq P(A) + P(B)$. This is usually referred to as the union bound.
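Continuing the fair-die sketch from above (an assumed example, not from the notes), these checks illustrate the complement rule, inclusion-exclusion, and the union bound for two concrete events.

 # Sketch: checking properties 2, 5, and 5a on the fair-die model.
 from fractions import Fraction

 S = {1, 2, 3, 4, 5, 6}

 def P(event):
     return Fraction(len(event & S), len(S))

 A, B = {1, 2, 3}, {3, 4}
 assert P(S - A) == 1 - P(A)                   # property 2: complement rule
 assert P(A | B) == P(A) + P(B) - P(A & B)     # property 5: inclusion-exclusion
 assert P(A | B) <= P(A) + P(B)                # property 5a: union bound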

Conditional probabilities

Suppose that we know that event B has occurred. We denote by $P(A|B)$ the probability that event A occurs given that event B has occurred. Intuitively, under this condition, we know that if A is going to occur, it must occur together with B, i.e., event AB must occur.

When $P(B) > 0$, we define the probability of event A given that event B has occurred as

$P(A|B) = \frac{P(AB)}{P(B)}$.
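For instance (an assumed example), roll two fair dice; let A be "the sum is 8" and B be "the first die shows 3". Then P(AB) = 1/36 (only the outcome (3,5) works) and P(B) = 1/6, so P(A|B) = (1/36)/(1/6) = 1/6. The same computation in a short sketch:

 # Sketch: conditional probability on two fair dice (uniform model on 36 outcomes).
 from fractions import Fraction
 from itertools import product

 S = list(product(range(1, 7), repeat=2))      # all 36 equally likely outcomes

 def P(event):
     return Fraction(sum(1 for s in S if event(s)), len(S))

 A = lambda s: s[0] + s[1] == 8                # event A: the sum is 8
 B = lambda s: s[0] == 3                       # event B: the first die shows 3
 AB = lambda s: A(s) and B(s)

 print(P(AB) / P(B))                           # P(A|B) = (1/36)/(1/6) = 1/6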

Independence

Events A and B are independent when the knowledge that A has occurred does not change the probability that B occurs.

Formally, we say that A and B are independent if and only if

$P(AB) = P(A)P(B)$.

This implies that if $P(B) > 0$, then $P(A|B) = P(A)$.
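For example (an assumed example), flip a fair coin twice; let A be "the first flip is heads" and B be "the second flip is heads". Then P(AB) = 1/4 = P(A)P(B), so A and B are independent. A small sketch checking this:

 # Sketch: independence of two events on two fair coin flips (uniform model).
 from fractions import Fraction
 from itertools import product

 S = list(product('HT', repeat=2))             # HH, HT, TH, TT, equally likely

 def P(event):
     return Fraction(sum(1 for s in S if event(s)), len(S))

 A = lambda s: s[0] == 'H'                     # first flip is heads
 B = lambda s: s[1] == 'H'                     # second flip is heads

 assert P(lambda s: A(s) and B(s)) == P(A) * P(B)   # P(AB) = P(A)P(B)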

If you have more than 2 events, independence becomes slightly more complicated. Consider events $E_1, E_2, \ldots, E_n$. We say that these events are mutually independent (or just independent) if and only if, for any subset of indices $\{i_1, i_2, \ldots, i_k\}$, we have that

$P(E_{i_1} E_{i_2} \cdots E_{i_k}) = P(E_{i_1}) P(E_{i_2}) \cdots P(E_{i_k})$.

A weaker form of independence is called pair-wise independence. In this case, events $E_1, E_2, \ldots, E_n$ are said to be pair-wise independent iff for any different i and j, $E_i$ and $E_j$ are independent (i.e., $P(E_i E_j) = P(E_i)P(E_j)$).
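A classic example (an assumed example, not from the notes) separates the two notions: with two fair coin flips, let A be "the first flip is heads", B be "the second flip is heads", and C be "the two flips agree". Every pair is independent, but P(ABC) = 1/4 while P(A)P(B)P(C) = 1/8, so the three events are pair-wise independent without being mutually independent.

 # Sketch: pair-wise independence without mutual independence.
 from fractions import Fraction
 from itertools import product

 S = list(product('HT', repeat=2))

 def P(event):
     return Fraction(sum(1 for s in S if event(s)), len(S))

 A = lambda s: s[0] == 'H'                     # first flip is heads
 B = lambda s: s[1] == 'H'                     # second flip is heads
 C = lambda s: s[0] == s[1]                    # the two flips agree

 # every pair multiplies, so the events are pair-wise independent
 assert P(lambda s: A(s) and B(s)) == P(A) * P(B)
 assert P(lambda s: A(s) and C(s)) == P(A) * P(C)
 assert P(lambda s: B(s) and C(s)) == P(B) * P(C)

 # but the triple does not, so they are not mutually independent
 assert P(lambda s: A(s) and B(s) and C(s)) != P(A) * P(B) * P(C)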

Bayes' formula

For any events A and B, if we know $P(A|B)$, $P(B)$, and $P(A)$, we can find the "inverse" probability

$P(B|A) = \frac{P(A|B)\,P(B)}{P(A)}$.
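For instance (assumed numbers, not from the notes): let B be "a patient has a certain disease" and A be "the screening test comes out positive", with P(A|B) = 0.99, P(B) = 0.01, and P(A) = 0.0594. Bayes' formula gives P(B|A) = 0.99 × 0.01 / 0.0594 ≈ 0.167, i.e., even after a positive test the disease is present only about 1 time in 6.

 # Sketch with assumed numbers: the "inverse" probability via Bayes' formula.
 p_A_given_B = 0.99     # P(A|B): test positive given disease
 p_B = 0.01             # P(B): prior probability of disease
 p_A = 0.0594           # P(A): overall probability of a positive test

 p_B_given_A = p_A_given_B * p_B / p_A    # Bayes' formula
 print(p_B_given_A)                       # about 0.167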

Example