212k views
5 votes
What is the maximum quantization error if we use a 12-bit ADC for signals?

User Jmoz
by
7.7k points

1 Answer

3 votes

Final answer:

The maximum quantization error for a 12-bit ADC is half the value of one LSB, which can be calculated by dividing the full-scale range by 8192. For a 0 to 5 V ADC, the error is about 0.61 mV.

Step-by-step explanation:

The maximum quantization error of an ideal analog-to-digital converter (ADC) is half the value of the Least Significant Bit (LSB). A 12-bit ADC has 2^12 = 4096 discrete levels, so the range of analog values is divided into these levels. To find the value of one LSB, take the full-scale range (FSR) and divide it by the number of levels. The quantization error is then half of an LSB, so the maximum quantization error is the voltage corresponding to FSR/8192.

For example, if the full-scale range of the ADC is 0 to 5 volts, then each step (LSB) equals 5 V/4096. The maximum quantization error is therefore (5 V/4096)/2 ≈ 0.00061 V, or 0.61 mV. The quantization error represents the inherent uncertainty in the digital representation of an analog signal using a finite number of bits; the smaller the error, the more accurately the digital value represents the signal.
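The calculation above can be sketched in a short Python snippet. The function name and the 0 to 5 V full-scale range are just the example values from the answer, not a standard API:

```python
def max_quantization_error(full_scale_range, bits):
    """Return (lsb, max_error) in volts for an ideal ADC."""
    levels = 2 ** bits                 # number of discrete output codes
    lsb = full_scale_range / levels    # volts per step (one LSB)
    return lsb, lsb / 2                # max error is half a step

# Example: 12-bit ADC with a 0 to 5 V full-scale range
lsb, err = max_quantization_error(5.0, 12)
print(f"LSB       = {lsb * 1e3:.4f} mV")   # ~1.2207 mV
print(f"Max error = {err * 1e3:.4f} mV")   # ~0.6104 mV
```

Doubling the bit count squares the number of levels, so each extra bit halves the maximum quantization error.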

User Kindra
by
8.9k points