Conditional probability distribution

In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. When both "X" and "Y" are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable.

If the conditional distribution of Y given X is a continuous distribution, then its probability density function is known as the conditional density function. The properties of a conditional distribution, such as the moments, are often referred to by corresponding names such as the conditional mean and conditional variance.

More generally, one can refer to the conditional distribution of a subset of a set of more than two variables; this conditional distribution is contingent on the values of all the remaining variables, and if more than one variable is included in the subset then this conditional distribution is the conditional joint distribution of the included variables.





Discrete distributions

For discrete random variables, the conditional probability mass function of Y given the occurrence of the value x of X can be written according to its definition as:

$$P_Y(y \mid X = x) = P(Y = y \mid X = x) = \frac{P(X = x \cap Y = y)}{P(X = x)}.$$

Because $P(X = x)$ occurs in the denominator, this is defined only for non-zero (hence strictly positive) $P(X = x)$.

The relation with the probability distribution of X given Y is:

$$P(Y = y \mid X = x)\,P(X = x) = P(X = x \cap Y = y) = P(X = x \mid Y = y)\,P(Y = y).$$
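
As a concrete illustration of the definition and of this product relation, here is a minimal Python sketch; the joint table is an invented example, not one taken from the text, and the helper names are ad hoc.

```python
# Minimal sketch: conditional pmf from a joint pmf (illustrative values only).
joint = {  # P(X = x, Y = y)
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.25, (1, 1): 0.35,
}

def marginal_x(x):
    """P(X = x), obtained by summing the joint pmf over y."""
    return sum(p for (xi, _), p in joint.items() if xi == x)

def marginal_y(y):
    """P(Y = y), obtained by summing the joint pmf over x."""
    return sum(p for (_, yi), p in joint.items() if yi == y)

def cond_y_given_x(y, x):
    """P(Y = y | X = x) = P(X = x, Y = y) / P(X = x), defined only if P(X = x) > 0."""
    px = marginal_x(x)
    if px == 0:
        raise ValueError("P(X = x) must be strictly positive")
    return joint[(x, y)] / px

def cond_x_given_y(x, y):
    """P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y), defined only if P(Y = y) > 0."""
    py = marginal_y(y)
    if py == 0:
        raise ValueError("P(Y = y) must be strictly positive")
    return joint[(x, y)] / py

# Check P(Y=y | X=x) P(X=x) = P(X=x, Y=y) = P(X=x | Y=y) P(Y=y) at one point.
x, y = 1, 0
lhs = cond_y_given_x(y, x) * marginal_x(x)
rhs = cond_x_given_y(x, y) * marginal_y(y)
assert abs(lhs - joint[(x, y)]) < 1e-12 and abs(rhs - joint[(x, y)]) < 1e-12
```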

Example

Consider the roll of a fair die and let A = 1 if the number is even (i.e. 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e. 2, 3, or 5) and B = 0 otherwise.

Then the unconditional probability that A = 1 is 3/6 = 1/2 (since there are six possible rolls of the die, of which three are even), whereas the probability that A = 1 conditional on B = 1 is 1/3 (since there are three possible prime rolls, namely 2, 3, and 5, of which only one, the roll of 2, is even).
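
The same numbers can be recovered by brute-force enumeration of the six equally likely faces; the following sketch simply re-derives the 1/2 and 1/3 above.

```python
from fractions import Fraction

outcomes = range(1, 7)                       # faces of a fair die, each with probability 1/6
A = lambda n: 1 if n % 2 == 0 else 0         # A = 1 if the roll is even
B = lambda n: 1 if n in (2, 3, 5) else 0     # B = 1 if the roll is prime

# Unconditional probability P(A = 1).
p_A1 = Fraction(sum(A(n) for n in outcomes), 6)

# Conditional probability P(A = 1 | B = 1) = P(A = 1 and B = 1) / P(B = 1).
p_A1_and_B1 = Fraction(sum(A(n) * B(n) for n in outcomes), 6)
p_B1 = Fraction(sum(B(n) for n in outcomes), 6)
p_A1_given_B1 = p_A1_and_B1 / p_B1

print(p_A1)           # 1/2
print(p_A1_given_B1)  # 1/3
```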





Continuous distributions

Similarly for continuous random variables, the conditional probability density function of Y given the occurrence of the value x of X can be written as

$$f_Y(y \mid X = x) = \frac{f_{X,Y}(x, y)}{f_X(x)},$$

where $f_{X,Y}(x, y)$ gives the joint density of X and Y, while $f_X(x)$ gives the marginal density of X. In this case, too, it is necessary that $f_X(x) > 0$.

The relation with the probability distribution of X given Y is given by:

$$f_Y(y \mid X = x)\,f_X(x) = f_{X,Y}(x, y) = f_X(x \mid Y = y)\,f_Y(y).$$
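
As a numerical sanity check of both formulas, the sketch below uses SciPy with an arbitrarily chosen bivariate normal (the parameters are assumptions for illustration, not taken from the text) and evaluates the conditional densities as ratios of the joint density to the marginal densities.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Arbitrary bivariate normal parameters, assumed for illustration only.
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])
joint = multivariate_normal(mean=mean, cov=cov)
f_X = norm(loc=mean[0], scale=np.sqrt(cov[0, 0]))   # marginal density of X
f_Y = norm(loc=mean[1], scale=np.sqrt(cov[1, 1]))   # marginal density of Y

x, y = 0.3, -1.2
f_xy = joint.pdf([x, y])

# f_{Y|X}(y | x) = f_{X,Y}(x, y) / f_X(x), valid here because f_X(x) > 0.
f_y_given_x = f_xy / f_X.pdf(x)
# f_{X|Y}(x | y) = f_{X,Y}(x, y) / f_Y(y)
f_x_given_y = f_xy / f_Y.pdf(y)

# Both products recover the joint density: f_{Y|X} f_X = f_{X,Y} = f_{X|Y} f_Y.
assert np.isclose(f_y_given_x * f_X.pdf(x), f_xy)
assert np.isclose(f_x_given_y * f_Y.pdf(y), f_xy)
```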

The concept of the conditional distribution of a continuous random variable is not as intuitive as it might seem: Borel's paradox shows that conditional probability density functions need not be invariant under coordinate transformations.

Example

Consider a bivariate normal joint density for random variables X and Y (illustrated by a graph in the original article). To see the distribution of Y conditional on X = 70, one can first visualize the line X = 70 in the X, Y plane, and then visualize the plane containing that line and perpendicular to the X, Y plane. The intersection of that plane with the joint normal density, once rescaled to give unit area under the intersection, is the relevant conditional density of Y.
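
For a bivariate normal this slice has a closed form: Y given X = x is again normal, with mean $\mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(x - \mu_X)$ and variance $\sigma_Y^2(1 - \rho^2)$. The sketch below uses invented parameters (the figure's actual parameters are not stated in the text) and checks that the rescaled slice of the joint density at X = 70 matches that closed form.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import multivariate_normal, norm

# Hypothetical parameters, chosen only for illustration.
mu_x, mu_y = 65.0, 65.0
sigma_x, sigma_y, rho = 5.0, 5.0, 0.6
cov = np.array([[sigma_x**2,              rho * sigma_x * sigma_y],
                [rho * sigma_x * sigma_y, sigma_y**2]])
joint = multivariate_normal(mean=[mu_x, mu_y], cov=cov)

x0 = 70.0
# Closed-form conditional distribution of Y given X = x0.
cond_mean = mu_y + rho * (sigma_y / sigma_x) * (x0 - mu_x)
cond_sd = sigma_y * np.sqrt(1.0 - rho**2)

# Numerical version: slice the joint density along X = x0 and rescale to unit area.
ys = np.linspace(mu_y - 6 * sigma_y, mu_y + 6 * sigma_y, 2001)
slice_vals = joint.pdf(np.column_stack([np.full_like(ys, x0), ys]))
slice_density = slice_vals / trapezoid(slice_vals, ys)

# The rescaled slice agrees with the closed-form conditional density.
assert np.allclose(slice_density, norm(cond_mean, cond_sd).pdf(ys), atol=1e-5)
```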




Relation to independence

Random variables X, Y are independent if and only if the conditional distribution of Y given X is, for all possible realizations of X, equal to the unconditional distribution of Y. For discrete random variables this means P(Y = y | X = x) = P(Y = y) for all possible x and y. For continuous random variables X and Y, having a joint density function, it means fY(y | X = x) = fY(y) for all possible x and y.
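
On a finite table this characterization can be checked directly by comparing every conditional pmf with the corresponding marginal; the sketch below uses two made-up joint tables, one of product form and one not.

```python
def is_independent(joint, tol=1e-12):
    """Check P(Y = y | X = x) == P(Y = y) for all x, y with P(X = x) > 0."""
    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    p_x = {x: sum(joint.get((x, y), 0.0) for y in ys) for x in xs}
    p_y = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}
    return all(
        abs(joint.get((x, y), 0.0) / p_x[x] - p_y[y]) < tol
        for x in xs if p_x[x] > 0
        for y in ys
    )

# Product of marginals (0.4, 0.6) over x and (0.3, 0.7) over y, hence independent.
independent = {(0, 0): 0.12, (0, 1): 0.28, (1, 0): 0.18, (1, 1): 0.42}
dependent   = {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.25, (1, 1): 0.35}

print(is_independent(independent))  # True
print(is_independent(dependent))    # False
```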




Properties

Seen as a function of y for given x, P(Y = y | X = x) is a probability and so the sum over all y (or integral if it is a conditional probability density) is 1. Seen as a function of x for given y, it is a likelihood function, so that the sum over all x need not be 1.
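
This asymmetry is easy to verify numerically; the sketch below reuses a small made-up joint table and sums the conditional pmf both ways.

```python
joint = {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.25, (1, 1): 0.35}
xs, ys = (0, 1), (0, 1)
p_x = {x: sum(joint[(x, y)] for y in ys) for x in xs}

def cond(y, x):
    """P(Y = y | X = x)."""
    return joint[(x, y)] / p_x[x]

# As a function of y for fixed x, the conditional pmf sums to 1.
print(sum(cond(y, x=0) for y in ys))    # 1.0
# As a function of x for fixed y, it is a likelihood; the sum need not be 1.
print(sum(cond(y=1, x=x) for x in xs))  # about 1.33 here, not 1
```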




Measure-theoretic formulation

Let $(\Omega, \mathcal{F}, P)$ be a probability space, $\mathcal{G} \subseteq \mathcal{F}$ a $\sigma$-field in $\mathcal{F}$, and $X : \Omega \to \mathbb{R}$ a real-valued random variable (measurable with respect to the Borel $\sigma$-field $\mathcal{R}^1$ on $\mathbb{R}$). It can be shown that there exists a function $\mu : \mathcal{R}^1 \times \Omega \to \mathbb{R}$ such that $\mu(\cdot, \omega)$ is a probability measure on $\mathcal{R}^1$ for each $\omega \in \Omega$ (i.e., it is regular) and $\mu(H, \cdot) = P(X \in H \mid \mathcal{G})$ (almost surely) for every $H \in \mathcal{R}^1$. For any $\omega \in \Omega$, the function $\mu(\cdot, \omega) : \mathcal{R}^1 \to \mathbb{R}$ is called a conditional probability distribution of $X$ given $\mathcal{G}$. In this case,

$$E[X \mid \mathcal{G}] = \int_{-\infty}^{\infty} x \,\mu(dx, \cdot)$$

almost surely.




Relation to conditional expectation

For any event $A \in \mathcal{A} \supseteq \mathcal{B}$, define the indicator function:

$$\mathbf{1}_A(\omega) = \begin{cases} 1 & \text{if } \omega \in A, \\ 0 & \text{if } \omega \notin A, \end{cases}$$

which is a random variable. Note that the expectation of this random variable is equal to the probability of A itself:

$$\operatorname{E}(\mathbf{1}_A) = \operatorname{P}(A).$$

Then the conditional probability given $\mathcal{B}$ is a function $\operatorname{P}(\cdot \mid \mathcal{B}) : \mathcal{A} \times \Omega \to [0, 1]$ such that $\operatorname{P}(A \mid \mathcal{B})$ is the conditional expectation of the indicator function for A:

$$\operatorname{P}(A \mid \mathcal{B}) = \operatorname{E}(\mathbf{1}_A \mid \mathcal{B}).$$

In other words, $\operatorname{P}(A \mid \mathcal{B})$ is a $\mathcal{B}$-measurable function satisfying

$$\int_B \operatorname{P}(A \mid \mathcal{B})(\omega) \,\mathrm{d}\operatorname{P}(\omega) = \operatorname{P}(A \cap B) \qquad \text{for all } A \in \mathcal{A},\ B \in \mathcal{B}.$$

A conditional probability is regular if $\operatorname{P}(\cdot \mid \mathcal{B})(\omega)$ is also a probability measure for all $\omega \in \Omega$. An expectation of a random variable with respect to a regular conditional probability is equal to its conditional expectation.

  • For the trivial $\sigma$-algebra $\mathcal{B} = \{\emptyset, \Omega\}$, the conditional probability is the constant function $\operatorname{P}(A \mid \{\emptyset, \Omega\}) \equiv \operatorname{P}(A)$.
  • For $A \in \mathcal{B}$, as outlined above, $\operatorname{P}(A \mid \mathcal{B}) = \mathbf{1}_A$.
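
On a finite sample space these definitions can be made completely explicit. The sketch below takes a fair die, lets $\mathcal{B}$ be the $\sigma$-algebra generated by the even/odd partition (an illustrative choice, not from the article), and computes $\operatorname{P}(A \mid \mathcal{B})(\omega)$ for the event A = "the roll is prime" by averaging the indicator $\mathbf{1}_A$ over the partition cell containing $\omega$; it then checks the defining integral identity.

```python
from fractions import Fraction

omega = [1, 2, 3, 4, 5, 6]                   # sample space of a fair die, P({w}) = 1/6
A = {2, 3, 5}                                # the event "the roll is prime"
partition = [{2, 4, 6}, {1, 3, 5}]           # generates the sigma-algebra B (even / odd)

def cond_prob_given_B(A, w):
    """P(A | B)(w) = E[1_A | B](w): average the indicator of A over the cell containing w."""
    cell = next(c for c in partition if w in c)
    return Fraction(len(A & cell), len(cell))

# P(A | B) is constant on each cell of the partition:
print([cond_prob_given_B(A, w) for w in omega])  # [2/3, 1/3, 2/3, 1/3, 2/3, 1/3]

# Defining property: integrating P(A | B) over any B in the sigma-algebra gives P(A ∩ B).
B = {2, 4, 6}
lhs = sum(cond_prob_given_B(A, w) for w in B) * Fraction(1, 6)
assert lhs == Fraction(len(A & B), 6)            # both equal P(A ∩ B) = 1/6
```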



See also

  • Conditioning (probability)
  • Conditional probability
  • Regular conditional probability
  • Bayes' theorem





