Probstat/notes/basic
''Current revision as of 03:58, 18 September 2014, by Jittat.''

: ''This is part of probstat. These notes are meant to be used in complement to the video lectures. They only contain a summary of the materials discussed in the video. Please don't use them to avoid watching the clips.''

== Random experiments ==
When we would like to talk about probability, we shall start with a random experiment. After we perform this experiment, we get an '''outcome'''. The set of all possible outcomes is called a '''sample space''', usually denoted by ''S''.

We are generally interested in sets of outcomes with certain properties; such a set is usually referred to as an event. Formally, an '''event''' is a subset of the sample space ''S''.
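
As a quick illustration (a hypothetical example, not from the lecture), the sample space of a single die roll and one of its events can be written down directly:

```python
# A minimal sketch: the sample space S of one roll of a fair six-sided die,
# with an event represented as a subset of S.
S = {1, 2, 3, 4, 5, 6}            # sample space
E = {n for n in S if n % 2 == 0}  # event "the outcome is even"

assert E <= S        # every event is a subset of the sample space
print(sorted(E))     # [2, 4, 6]
```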
+ | |||
+ | == Probability axioms == | ||
+ | |||
+ | We would like to assign probabilities to events. Let ''S'' denote the sample space. Formally a function ''P'' is a probability function if it satisfies the follow 3 axioms. | ||
+ | |||
+ | {{กล่องฟ้า|'''Axiom 1:''' For any event ''E'', <math>0\leq P(E)\leq 1</math>.}} | ||
+ | |||
+ | {{กล่องฟ้า|'''Axiom 2:''' <math>P(S)=1</math>.}} | ||
+ | |||
+ | {{กล่องฟ้า|'''Axiom 3:''' For any countable sequence of mutually exclusive events <math>E_1,E_2,\ldots</math>, we have that | ||
+ | |||
+ | <center><math>P(\bigcup_i E_i) = \sum_i P(E_i)</math>.</center> | ||
+ | }} | ||
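
On a finite sample space, these axioms are easy to check directly. A small sketch (using a hypothetical fair-die model with equal weight on each outcome; the names `p` and `P` are our own):

```python
from fractions import Fraction
from itertools import chain, combinations

S = (1, 2, 3, 4, 5, 6)
p = {s: Fraction(1, 6) for s in S}   # equal weight on each outcome

def P(event):
    """Probability of an event (a subset of S) as the sum of outcome weights."""
    return sum(p[s] for s in event)

# Axiom 1: 0 <= P(E) <= 1 for every event E (every subset of S).
events = chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
assert all(0 <= P(E) <= 1 for E in events)
# Axiom 2: P(S) = 1.
assert P(S) == 1
# Axiom 3 (finite case): additivity over disjoint events.
assert P({1, 2}) + P({5}) == P({1, 2, 5})
```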

== Useful properties ==
There are various properties that can be proved from the axioms. We list a few here.

'''1:''' <math>P(\emptyset) = 0</math>.

'''2:''' <math>P(A^c) = 1 - P(A)</math>.

'''3:''' For events ''A'' and ''B'', <math>P(A\cup B)\geq P(A)</math>.

'''4:''' For events ''A'' and ''B'', <math>P(A\cap B)\leq P(A)</math>.

'''5:''' For events ''A'' and ''B'', <math>P(A\cup B) = P(A) + P(B) - P(AB)</math>.

'''5a:''' For events ''A'' and ''B'', <math>P(A\cup B) \leq P(A) + P(B)</math>. This is usually referred to as the union bound.
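
These properties can be verified on a small example. A sketch under the same hypothetical fair-die model (events ''A'' and ''B'' are made-up choices for illustration):

```python
from fractions import Fraction

# Hypothetical fair-die model: P(E) = |E| / 6 for an event E.
def P(event):
    return Fraction(len(event), 6)

A = {1, 2, 3}          # "at most three"
B = {2, 4, 6}          # "even"

# Property 5 (inclusion-exclusion): P(A u B) = P(A) + P(B) - P(AB).
assert P(A | B) == P(A) + P(B) - P(A & B)
# Property 5a (union bound): P(A u B) <= P(A) + P(B).
assert P(A | B) <= P(A) + P(B)
# Properties 3 and 4: P(A u B) >= P(A) and P(AB) <= P(A).
assert P(A | B) >= P(A) and P(A & B) <= P(A)
```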

== Conditional probabilities ==
Suppose that we know that event ''B'' has occurred. We denote by <math>P(A|B)</math> the probability that event ''A'' occurs given that event ''B'' has occurred. Intuitively, under this condition, if ''A'' is going to occur, it must occur together with ''B'', i.e., event ''AB'' must occur.

{{กล่องฟ้า|When <math>P(B)\neq 0</math>, we define the probability of event ''A'' given that event ''B'' has occurred as

<center><math>P(A|B)=\frac{P(AB)}{P(B)}</math>.</center>
}}
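
The definition can be computed directly on a finite model. A sketch, again with the hypothetical fair die (the helper `cond` is our own name):

```python
from fractions import Fraction

# Hypothetical fair-die model: P(E) = |E| / 6 for an event E.
def P(event):
    return Fraction(len(event), 6)

def cond(A, B):
    """P(A|B) = P(AB) / P(B), defined when P(B) != 0."""
    return P(A & B) / P(B)

A = {2}                # "roll a two"
B = {2, 4, 6}          # "roll an even number"

print(cond(A, B))      # 1/3: given the roll is even, one of three outcomes is a two
```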

== Independence ==
Events ''A'' and ''B'' are independent when the knowledge that ''A'' has occurred does not change the probability that ''B'' occurs.

{{กล่องฟ้า|Formally, we say that ''A'' and ''B'' are '''independent''' if and only if

<center><math>P(AB)=P(A)P(B)</math>.</center>
}}

This implies that if <math>P(A)\neq 0</math>, then <math>P(B)=P(B|A)</math>.

If you have more than two events, independence becomes slightly more complicated. Consider events <math>A_1,A_2,\ldots,A_n</math>. We say that these events are '''mutually independent''' (or just independent) if and only if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> of indices, we have that

<center><math>P(\bigcap_{i\in I} A_i) = \prod_{i\in I}P(A_i)</math>.</center>

A weaker form of independence is called pair-wise independence. In this case, events <math>A_1,A_2,\ldots,A_n</math> are said to be '''pair-wise independent''' iff for any distinct ''i'' and ''j'', <math>A_i</math> and <math>A_j</math> are independent (i.e., <math>P(A_iA_j)=P(A_i)P(A_j)</math>).
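
The gap between the two notions can be seen in a standard small example (not from the lecture): flip two fair coins, and let ''A'' be "the first flip is heads", ''B'' "the second flip is heads", and ''C'' "the two flips match". Every pair of these events factors, but the triple does not:

```python
from fractions import Fraction
from itertools import product

S = set(product("HT", repeat=2))       # sample space: two fair coin flips

def P(E):
    # Uniform model: all four outcomes equally likely.
    return Fraction(len(E), len(S))

A = {s for s in S if s[0] == "H"}      # first flip is heads
B = {s for s in S if s[1] == "H"}      # second flip is heads
C = {s for s in S if s[0] == s[1]}     # the two flips match

# Pairwise independent: each pair factors...
assert P(A & B) == P(A) * P(B)
assert P(A & C) == P(A) * P(C)
assert P(B & C) == P(B) * P(C)
# ...but not mutually independent: the triple intersection does not factor.
assert P(A & B & C) != P(A) * P(B) * P(C)
```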

== Bayes' formula ==
For any events ''A'' and ''B'', if we know <math>P(A)</math>, <math>P(B|A)</math>, and <math>P(B|A^{c})</math>, we can find the "inverse" probability

<center>
<math>P(A|B) = \frac{P(AB)}{P(B)} = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|A^c)P(A^c)} = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|A^c)(1 - P(A))}</math>.
</center>

=== Example ===
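
A standard illustration of Bayes' formula (with made-up numbers, since the example in these notes is left to the video): suppose a disease has prevalence 1%, and a test detects it with probability 0.95 but also gives a false positive with probability 0.05. Letting ''A'' = "has the disease" and ''B'' = "test is positive", the formula gives the probability of disease given a positive test:

```python
from fractions import Fraction

# Hypothetical numbers: prevalence P(A), sensitivity P(B|A),
# and false-positive rate P(B|A^c).
P_A = Fraction(1, 100)
P_B_given_A = Fraction(95, 100)
P_B_given_Ac = Fraction(5, 100)

# Bayes' formula: P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|A^c)(1 - P(A))).
numerator = P_B_given_A * P_A
P_A_given_B = numerator / (numerator + P_B_given_Ac * (1 - P_A))
print(P_A_given_B)        # 19/118, about 0.16
```

Even with a fairly accurate test, the posterior probability is only about 16%, because the disease is rare; this is the kind of "inverse" reasoning the formula captures.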