Final answer:
Absorbance measurements outside the optimal range of 0.3–2 are less accurate because they may be affected by baseline noise, stray light, detector saturation, or nonlinear absorption, all of which cause deviations from the linearity of the Beer–Lambert Law.
Step-by-step explanation:
Absorbance measurements within the range of A = 0.3–2 are generally considered the most accurate because of the linear response described by the Beer–Lambert Law (A = εlc), which states that absorbance is directly proportional to concentration. A reading of 0.05 is too low: it may be near the instrument's limit of detection or dominated by baseline noise or stray light, so the data are less reliable. A reading of 2.5 is too high: the sample may be so concentrated that the detector saturates or nonlinear absorption effects appear, causing deviation from the linear range of the Beer–Lambert Law. Such high absorbance can also produce inner-filter effects, in which photons are absorbed before penetrating the bulk of the sample, further compromising the accuracy of the measurement.
For instance, using the given example of NAD⁺ at 260 nm with a molar absorptivity (ε) of 18,000 L·mol⁻¹·cm⁻¹ and an observed A₂₆₀ = 1.0, the concentration (assuming a standard 1 cm path length) is c = A/(εl) = 1.0/(18,000 × 1) ≈ 5.6 × 10⁻⁵ M. The absorbance of 1.0 sits comfortably within the optimal range, so the Beer–Lambert relationship can be trusted here; it becomes unreliable at very low and very high absorbance values.
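A minimal sketch of this calculation, assuming a 1 cm cuvette; the function and variable names are illustrative, not part of the original example:

```python
def beer_lambert_concentration(absorbance, molar_absorptivity, path_length_cm=1.0):
    """Return concentration (mol/L) from the Beer-Lambert Law: A = epsilon * l * c."""
    return absorbance / (molar_absorptivity * path_length_cm)

# NAD+ example: A260 = 1.0, epsilon = 18,000 L mol^-1 cm^-1, 1 cm path (assumed)
c = beer_lambert_concentration(1.0, 18_000)
print(f"Concentration: {c:.1e} M")  # prints ~5.6e-05 M
```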