
Normal number (computing)

From Wikipedia, the free encyclopedia


In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
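As a minimal sketch of how this classification appears in practice, the following C program uses the standard isnormal() macro from <math.h>; the commented results assume double is an IEEE 754 binary64 format and that subnormals are not flushed to zero.

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* isnormal() is non-zero only for finite, non-zero values that
           lie inside the normal range of the value's format. */
        printf("%d\n", isnormal(1.0) != 0);         /* 1: ordinary normal number        */
        printf("%d\n", isnormal(DBL_MIN) != 0);     /* 1: smallest positive normal      */
        printf("%d\n", isnormal(DBL_MIN / 2) != 0); /* 0: subnormal, not normal         */
        printf("%d\n", isnormal(0.0) != 0);         /* 0: zero is not a normal number   */
        return 0;
    }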

The magnitude of the smallest normal number in a format is given by

    b^{E_\text{min}}

where b is the base (radix) of the format (commonly 2 or 10, for the binary and decimal number systems respectively), and E_min depends on the size and layout of the format.
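For instance, assuming float and double map to IEEE 754 binary32 and binary64 (as they do on most platforms), the smallest normal magnitudes 2^-126 and 2^-1022 can be computed with ldexp() and compared against the <float.h> limits:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* Smallest positive normal magnitude: b^E_min. */
        float  smallest32 = ldexpf(1.0f, -126);  /* 2^-126  (binary32) */
        double smallest64 = ldexp(1.0, -1022);   /* 2^-1022 (binary64) */
        printf("%d %d\n",
               smallest32 == FLT_MIN,   /* 1 on IEEE 754 platforms */
               smallest64 == DBL_MIN);  /* 1 on IEEE 754 platforms */
        return 0;
    }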

Similarly, the magnitude of the largest normal number in a format is given by

    b^{E_\text{max}} \times (b - b^{1-p})

where p is the precision of the format in digits and E_max is related to E_min as

    E_\text{min} = 1 - E_\text{max}
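As a quick check of this formula, again assuming double is IEEE 754 binary64 (b = 2, p = 53, E_max = 1023), the expression reproduces DBL_MAX exactly:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* Largest normal magnitude: b^E_max * (b - b^(1-p)).
           For binary64: b = 2, p = 53, E_max = 1023. */
        double largest = ldexp(1.0, 1023) * (2.0 - ldexp(1.0, 1 - 53));
        printf("%d\n", largest == DBL_MAX);  /* 1 on IEEE 754 platforms */
        return 0;
    }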

In the IEEE 754 binary and decimal formats, b, p, E_min, and E_max have the following values:[1]

Format       b    p     E_min    E_max
binary16     2    11    -14      15
binary32     2    24    -126     127
binary64     2    53    -1022    1023
binary128    2    113   -16382   16383
decimal32    10   7     -95      96
decimal64    10   16    -383     384
decimal128   10   34    -6143    6144

For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is 10^-95 through 9.999999 × 10^96.
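These endpoints follow directly from the two formulas above; substituting the decimal32 parameters from the table (b = 10, p = 7, E_min = -95, E_max = 96) gives:

    b^{E_\text{min}} = 10^{-95}
    b^{E_\text{max}} \times (b - b^{1-p}) = 10^{96} \times (10 - 10^{-6}) = 9.999999 \times 10^{96}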

Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).

Zero is considered neither normal nor subnormal.
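A minimal C sketch of these three classes uses fpclassify() from <math.h>; the commented results again assume IEEE 754 doubles with subnormals enabled (no flush-to-zero).

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* fpclassify() separates normals, subnormals and zeros
           (the remaining classes are FP_INFINITE and FP_NAN). */
        printf("%d\n", fpclassify(DBL_MIN)      == FP_NORMAL);     /* 1 */
        printf("%d\n", fpclassify(DBL_MIN / 16) == FP_SUBNORMAL);  /* 1 */
        printf("%d\n", fpclassify(0.0)          == FP_ZERO);       /* 1 */
        printf("%d\n", fpclassify(-0.0)         == FP_ZERO);       /* 1: signed zero is still zero */
        return 0;
    }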


See also

References
