Orders of magnitude (data)

Computer data measurements and scales

The order of magnitude of data may be specified either in strictly standards-conformant units of information (multiples of the bit and byte with decimal scaling), or in the historically common usage in which a few multiplier prefixes are given a binary interpretation, a convention that was widespread in computing until dedicated binary prefixes were defined in the 1990s.

Units of measure

For much of the information age, the byte has been the commonly used unit of measure for a group of bits. In the early days of computing it referred to differing numbers of bits, depending on convention and computer hardware design, but today it means 8 bits. A more precise, though less commonly used, name for 8 bits is the octet.

Commonly, a decimal SI metric prefix (such as kilo-) is used with bit and byte to express larger sizes (kilobit, kilobyte). However, these prefixes are decimal, whereas hardware sizes are usually powers of two. Customarily, each metric prefix 1000^n is used as a close approximation of the binary multiple 1024^n. This distinction is often left implicit, so the use of metric prefixes can lead to confusion. The IEC binary prefixes (such as kibi-) allow hardware sizes to be described exactly, but they are not commonly used.[1][2]
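
As an informal sketch of the gap between the two interpretations, the following Python snippet compares 1000^n with 1024^n for the common prefixes (the prefix names follow usual SI and IEC usage and are listed here only for illustration):

    # Compare the decimal (SI) and binary (IEC) readings of each prefix.
    # Illustrative only; prefix lists follow common SI/IEC usage.
    si_prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa"]
    iec_prefixes = ["kibi", "mebi", "gibi", "tebi", "pebi", "exbi"]

    for n, (si, iec) in enumerate(zip(si_prefixes, iec_prefixes), start=1):
        decimal = 1000 ** n   # e.g. 1 kilobyte = 1,000 bytes
        binary = 1024 ** n    # e.g. 1 kibibyte = 1,024 bytes
        gap = 100 * (binary - decimal) / decimal
        print(f"1 {si}byte = {decimal:,} B;  1 {iec}byte = {binary:,} B  (+{gap:.1f}%)")

The relative gap is about 2.4% at the kilo/kibi level and grows to roughly 15% at the exa/exbi level, which is why a storage capacity quoted in decimal units appears smaller when reported in binary multiples.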

Entropy

This article references two kinds of entropy, which are not entirely equivalent. For comparison, the Avogadro constant is 6.02214076×10^23 entities per mole, a value historically based on the number of atoms in 12 grams of the carbon-12 isotope. See Entropy in thermodynamics and information theory.
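
As a rough bridge between the two notions (a sketch based on the standard correspondence described at Entropy in thermodynamics and information theory), one bit of information corresponds to a thermodynamic entropy of

\[
S_{1\,\mathrm{bit}} = k_B \ln 2 \approx 1.380649 \times 10^{-23}\,\mathrm{J/K} \times 0.693 \approx 9.57 \times 10^{-24}\,\mathrm{J/K},
\]

so one mole of two-state entities (about 6.022×10^23 of them) carries at most N_A k_B ln 2 = R ln 2 ≈ 5.76 J/(mol·K) of entropy, i.e. roughly 6×10^23 bits.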

List

[Table: orders of magnitude of data, with binary (bits) and decimal values.]

See also

References
