Probability vector

Vector with non-negative entries that add up to one

In mathematics and statistics, a probability vector or stochastic vector is a vector with non-negative entries that add up to one.

Underlying every probability vector is an experiment that can produce an outcome. To connect this experiment to mathematics, one introduces a discrete random variable, which is a function that assigns a numerical value to each possible outcome. For example, if the experiment consists of rolling a single die, the possible values of this random variable are the integers 1,2,…,6. The associated probability vector has six components, each representing the probability of obtaining the corresponding outcome. More generally, a probability vector of length n represents the distribution of probabilities across the n possible numerical outcomes of a random variable.[1]
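
As a minimal computational illustration (the array name p and the choice of NumPy are this sketch's, not anything from the cited source), the following Python snippet builds the length-6 probability vector of a fair die and checks the two defining conditions:

```python
import numpy as np

# Probability vector for a single fair die: six outcomes, each with probability 1/6.
p = np.full(6, 1 / 6)

# The two defining properties of a probability vector:
assert np.all(p >= 0)            # non-negative entries
assert np.isclose(p.sum(), 1.0)  # entries sum to one
```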

The vector gives us the probability mass function of that random variable, which is the standard way of characterizing a discrete probability distribution.[2]
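
In symbols (a standard restatement rather than notation taken from the cited sources): if a random variable $X$ takes the values $x_1, \dots, x_n$ and has probability vector $p = (p_1, \dots, p_n)$, its probability mass function is

$p_X(x_i) = \Pr(X = x_i) = p_i, \qquad i = 1, \dots, n.$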

Examples

Here are some examples of probability vectors. The vectors can be either columns or rows.[3]

$x_0 = \begin{bmatrix} 0.5 \\ 0.25 \\ 0.25 \end{bmatrix}, \quad x_1 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 0.65 & 0.35 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 0.3 & 0.5 & 0.07 & 0.1 & 0.03 \end{bmatrix}.$
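
In practice, probability vectors often arise by normalizing non-negative counts or weights. The sketch below is a minimal Python example of that construction (the function name normalize and the sample weights are invented for illustration):

```python
import numpy as np

def normalize(weights):
    """Turn a vector of non-negative weights into a probability vector."""
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0):
        raise ValueError("entries must be non-negative")
    total = w.sum()
    if total == 0:
        raise ValueError("at least one entry must be positive")
    return w / total

p = normalize([3, 1, 1, 5])   # -> array([0.3, 0.1, 0.1, 0.5])
```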

Properties

  • The mean of the components of any probability vector is $1/n$.[4]
  • The Euclidean length of a probability vector is related to the variance $\sigma^2$ of its components by $|p| = \sqrt{n\sigma^2 + 1/n}$.[5]
  • This expression for length reaches its minimum value of $1/\sqrt{n}$ when all components are equal, with $\sigma^2 = 0$.[3]
  • The longest probability vector has the value 1 in a single component and 0 in all others, and has a length of 1.[3]
  • The shortest vector corresponds to maximum uncertainty, the longest to maximum certainty.
  • The variance of the components of a probability vector satisfies $0 \le \sigma^2 \le \frac{n-1}{n^2}$. The lower bound occurs when all components are equal to $1/n$, and the upper bound when one component equals $1$ and the rest are $0$.[6]
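
These relations are straightforward to check numerically. The following Python sketch (function and variable names are illustrative) verifies the length-variance identity and the variance bounds for a few probability vectors:

```python
import numpy as np

def check_properties(p):
    p = np.asarray(p, dtype=float)
    n = p.size
    var = p.var()               # population variance of the components
    length = np.linalg.norm(p)  # Euclidean length

    # Length-variance relation: |p| = sqrt(n * sigma^2 + 1/n)
    assert np.isclose(length, np.sqrt(n * var + 1 / n))
    # Variance bounds: 0 <= sigma^2 <= (n - 1) / n^2
    assert 0 <= var <= (n - 1) / n**2 + 1e-12

check_properties([0.5, 0.25, 0.25])  # generic vector
check_properties([1/3, 1/3, 1/3])    # uniform: shortest, sigma^2 = 0
check_properties([1, 0, 0])          # degenerate: longest, maximum variance
```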

Significance of the bounds on variance

The bounds on variance show that as the number of possible outcomes $n$ increases, the variance necessarily decreases toward zero. As a result, the uncertainty associated with any single outcome increases because the components of the probability vector become more nearly equal. In empirical work, this often motivates binning the outcomes to reduce $n$; although this discards some information contained in the original outcomes, it allows the coarser-grained structure of the distribution to be revealed. The decrease in variance with increasing $n$ reflects the same tendency toward uniformity that underlies entropy in information theory and statistical mechanics.[7]
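
As a concrete illustration of the binning remark above (the synthetic data, seed, and bin counts are invented for this sketch), the following snippet forms empirical probability vectors from the same sample at two resolutions; the coarser vector, having smaller $n$, shows the larger component variance:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=10_000)   # any empirical data would do here

def empirical_prob_vector(data, n_bins):
    counts, _ = np.histogram(data, bins=n_bins)
    return counts / counts.sum()    # normalize counts into a probability vector

fine = empirical_prob_vector(samples, n_bins=100)  # large n: small components, small variance
coarse = empirical_prob_vector(samples, n_bins=5)  # small n: larger variance

print(fine.var(), coarse.var())     # coarse.var() > fine.var()
```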
