
Conditional independence
In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without. If $A$ is the hypothesis, and $B$ and $C$ are observations, conditional independence can be stated as an equality:

$$P(A \mid B, C) = P(A \mid C)$$
where $P(A \mid B, C)$ is the probability of $A$ given both $B$ and $C$. Since the probability of $A$ given $C$ is the same as the probability of $A$ given both $B$ and $C$, this equality expresses that $B$ contributes nothing to the certainty of $A$. In this case, $A$ and $B$ are said to be conditionally independent given $C$, written symbolically as $(A \perp\!\!\!\perp B \mid C)$.
In the language of causal equality notation, two functions $f$ and $g$ which both depend on a common variable $X$ are described as conditionally independent using the notation $f \perp\!\!\!\perp g \mid X$, which is equivalent to the equality $P(f \mid g, X) = P(f \mid X)$.
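The defining equality can be checked numerically. The sketch below builds a toy joint distribution in which $A$ and $B$ are generated independently once a common cause $C$ is fixed; all probability values are hypothetical, chosen only to illustrate that $P(A \mid B, C) = P(A \mid C)$ holds in such a model.

```python
from itertools import product

# A toy common-cause model (all numbers hypothetical): C is a fair
# coin, and given C, the binary variables A and B are drawn
# independently, so A and B are conditionally independent given C.
p_c = {0: 0.5, 1: 0.5}          # P(C = c)
p_a_given_c = {0: 0.2, 1: 0.7}  # P(A = 1 | C = c)
p_b_given_c = {0: 0.4, 1: 0.9}  # P(B = 1 | C = c)

def joint(a, b, c):
    """P(A=a, B=b, C=c) under the factorization P(C) P(A|C) P(B|C)."""
    pa = p_a_given_c[c] if a else 1 - p_a_given_c[c]
    pb = p_b_given_c[c] if b else 1 - p_b_given_c[c]
    return p_c[c] * pa * pb

def cond_prob(event, given):
    """P(event | given); both arguments are dicts like {'a': 1, 'c': 0}."""
    num = den = 0.0
    for a, b, c in product((0, 1), repeat=3):
        point = {'a': a, 'b': b, 'c': c}
        if all(point[k] == v for k, v in given.items()):
            den += joint(a, b, c)
            if all(point[k] == v for k, v in event.items()):
                num += joint(a, b, c)
    return num / den

for c in (0, 1):
    lhs = cond_prob({'a': 1}, {'b': 1, 'c': c})  # P(A=1 | B=1, C=c)
    rhs = cond_prob({'a': 1}, {'c': c})          # P(A=1 | C=c)
    print(f"c={c}: P(A|B,C) = {lhs:.3f}, P(A|C) = {rhs:.3f}")
```

Running it prints matching values for both conditionings. By contrast, $P(A)$ and $P(A \mid B)$ would generally differ in this model, since observing $B$ carries information about the common cause $C$.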
The concept of conditional independence is essential to graph-based theories of statistical inference, as it establishes a mathematical relation between a collection of conditional statements and a graphoid.
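To make that relation concrete: conditional independence statements satisfy the semi-graphoid axioms, which is what allows a collection of such statements to be manipulated as a graphoid. A brief sketch (the variable names $X$, $Y$, $Z$, $W$ are illustrative, not from the text above):

```latex
% Semi-graphoid axioms satisfied by conditional independence
% (X, Y, Z, W are illustrative random variables):
\begin{align*}
  &\text{Symmetry:}      && (X \perp\!\!\!\perp Y \mid Z) \implies (Y \perp\!\!\!\perp X \mid Z) \\
  &\text{Decomposition:} && (X \perp\!\!\!\perp Y, W \mid Z) \implies (X \perp\!\!\!\perp Y \mid Z) \\
  &\text{Weak union:}    && (X \perp\!\!\!\perp Y, W \mid Z) \implies (X \perp\!\!\!\perp Y \mid Z, W) \\
  &\text{Contraction:}   && (X \perp\!\!\!\perp Y \mid Z) \wedge (X \perp\!\!\!\perp W \mid Z, Y) \implies (X \perp\!\!\!\perp Y, W \mid Z)
\end{align*}
```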