Final answer:
Single precision uses 32 bits while double precision uses 64 bits to represent floating-point numbers in computer systems.
Step-by-step explanation:
In computer systems, floating-point numbers are most commonly stored in the two IEEE 754 binary formats: single precision, which uses 32 bits per number, and double precision, which uses 64 bits. The extra bits buy both a wider exponent range and more significant digits.
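As a quick sanity check of those sizes, here is a minimal sketch using Python's standard `struct` module, whose format codes `'f'` and `'d'` pack values as IEEE 754 singles and doubles:

```python
import struct

# 'f' packs a value as an IEEE 754 single, 'd' as a double.
print(struct.calcsize('f') * 8)  # 32 bits for single precision
print(struct.calcsize('d') * 8)  # 64 bits for double precision
```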
For single precision, the 32 bits are arranged as follows (a decoding sketch follows the list):
- 1 bit for the sign (positive or negative)
- 8 bits for the exponent, which sets the magnitude (range) of the number and is stored with a bias of 127
- 23 bits for the significant digits (also known as the mantissa or fraction); with the implicit leading 1 bit, this gives about 7 decimal digits of precision
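A minimal sketch of how these three fields can be pulled out of a concrete value, again using the standard `struct` module; the helper name `split_single` is just illustrative:

```python
import struct

def split_single(x):
    """Split a value's 32-bit single-precision pattern into its fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]  # raw 32-bit pattern
    sign = bits >> 31               # 1 sign bit
    exponent = (bits >> 23) & 0xFF  # 8 exponent bits, biased by 127
    fraction = bits & 0x7FFFFF      # 23 fraction (mantissa) bits
    return sign, exponent, fraction

# -6.5 = -1.101 (binary) * 2^2, so the stored exponent is 127 + 2 = 129.
print(split_single(-6.5))  # (1, 129, 5242880)
```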
For double precision, the 64 bits are arranged as follows (a matching sketch follows the list):
- 1 bit for the sign (positive or negative)
- 11 bits for the exponent, which sets the magnitude (range) of the number and is stored with a bias of 1023
- 52 bits for the significant digits (also known as the mantissa or fraction); with the implicit leading 1 bit, this gives about 15 to 16 decimal digits of precision
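The same decoding idea carries over to double precision, just with wider masks and the bias of 1023; as before, the helper name `split_double` is only illustrative:

```python
import struct

def split_double(x):
    """Split a value's 64-bit double-precision pattern into its fields."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]   # raw 64-bit pattern
    sign = bits >> 63                  # 1 sign bit
    exponent = (bits >> 52) & 0x7FF    # 11 exponent bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52 fraction (mantissa) bits
    return sign, exponent, fraction

# Same value as before: -6.5 stores exponent 1023 + 2 = 1025 here.
print(split_double(-6.5))  # (1, 1025, 2814749767106560)
```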