 # Conditional independence


In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without that observation. If $A$ is the hypothesis, and $B$ and $C$ are observations, conditional independence can be stated as an equality:
$$P(A\mid B,C)=P(A\mid C)$$

where $P(A\mid B,C)$ is the probability of $A$ given both $B$ and $C$. Since the probability of $A$ given $C$ is the same as the probability of $A$ given both $B$ and $C$, this equality expresses that $B$ contributes nothing to the certainty of $A$. In this case, $A$ and $B$ are said to be conditionally independent given $C$, written symbolically as $(A\perp \!\!\!\perp B\mid C)$.

In the language of causal equality notation, two functions $f(y)$ and $g(y)$ which both depend on a common variable $y$ are described as conditionally independent using the notation $f\left(y\right)~{\overset {\curvearrowleft \curvearrowright }{=}}~g\left(y\right)$, which is equivalent to the notation $P(f\mid g,y)=P(f\mid y)$.
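The defining equality $P(A\mid B,C)=P(A\mid C)$ can be checked numerically. The sketch below uses a hypothetical joint distribution over three binary variables, constructed (by assumption, purely for illustration) so that $A$ and $B$ are conditionally independent given $C$ via the factorization $P(a,b,c)=P(c)\,P(a\mid c)\,P(b\mid c)$; the specific probability values are invented:

```python
from itertools import product

# Hypothetical conditional tables (values chosen arbitrarily for the demo).
p_c = {0: 0.4, 1: 0.6}
p_a_given_c = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # p_a_given_c[c][a]
p_b_given_c = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}  # p_b_given_c[c][b]

# Joint distribution built so that A ⊥ B | C holds by construction:
# P(a, b, c) = P(c) * P(a | c) * P(b | c)
joint = {(a, b, c): p_c[c] * p_a_given_c[c][a] * p_b_given_c[c][b]
         for a, b, c in product([0, 1], repeat=3)}

def marginal(fixed):
    """Sum the joint over all outcomes consistent with `fixed`,
    a dict mapping variable position (0=A, 1=B, 2=C) to its value."""
    return sum(p for abc, p in joint.items()
               if all(abc[i] == v for i, v in fixed.items()))

# Compare P(A=1 | B=1, C=1) with P(A=1 | C=1).
p_a_given_bc = marginal({0: 1, 1: 1, 2: 1}) / marginal({1: 1, 2: 1})
p_a_given_c_only = marginal({0: 1, 2: 1}) / marginal({2: 1})

print(p_a_given_bc)       # 0.8
print(p_a_given_c_only)   # 0.8 — observing B adds nothing once C is known
```

Because the joint factorizes through $C$, the two conditional probabilities agree for every choice of values, which is exactly the statement $(A\perp \!\!\!\perp B\mid C)$.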