Final answer:
Computers require programs with robust testing and validation mechanisms to differentiate between good and bad data. A computer's output is determined by its programming and the data it processes, much as human behavior is driven by genetic makeup and experience. The interplay of data validation and human oversight is critical for obtaining reliable results and recognizing unforeseen patterns.
Step-by-step explanation:
Programs should indeed be designed to reject bad data in order to preserve the integrity and accuracy of computing processes. A computer cannot distinguish good data from bad data on its own; it processes all input according to the logic coded into its software. The responsibility for discerning valid input from invalid input therefore lies with the program's algorithms, which is why thorough testing and validation mechanisms are necessary.
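To make this concrete, here is a minimal sketch of input validation in Python. The scenario is hypothetical (it assumes the program expects a temperature reading between -90 and 60 degrees Celsius); the point is simply that the program, not the computer, decides what counts as bad data and rejects it before it is used.

```python
def validate_reading(raw: str) -> float:
    """Reject bad input before it reaches the rest of the program.

    Hypothetical rule: valid data is a number between -90 and 60
    (degrees Celsius); anything else is rejected with an error.
    """
    try:
        value = float(raw)
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if not -90.0 <= value <= 60.0:
        raise ValueError(f"out of plausible range: {value}")
    return value


# Only data that passes validation reaches the rest of the program.
good = validate_reading("21.5")    # accepted
# validate_reading("banana")       # rejected: not a number
# validate_reading("9999")         # rejected: out of range
```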
When software is released with known problems, these can stem from the difficulty of recreating specific issues, as mentioned in Stephen Chen's 'Untitled' work. This highlights the need for robust testing programs, like the fictional 'test program X' with a 1 percent error generation chance, which attempt to uncover such issues systematically so they can be fixed.
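One way to picture such a 'test program X' is a test loop that feeds a validator mostly good data but deliberately injects a bad value about 1 percent of the time, then checks that every injected error is caught. This is only an illustrative sketch built on the hypothetical temperature validator above, not Chen's actual program.

```python
import random


def is_valid_reading(raw: str) -> bool:
    """Hypothetical validity check: a number within -90..60 degrees C."""
    try:
        return -90.0 <= float(raw) <= 60.0
    except ValueError:
        return False


def test_validator(trials: int = 100_000, error_rate: float = 0.01) -> None:
    """Feed mostly good data, injecting bad data about 1% of the time."""
    injected = 0
    missed = 0
    for _ in range(trials):
        if random.random() < error_rate:          # ~1 percent error generation
            raw, should_pass = "not-a-number", False
            injected += 1
        else:
            raw, should_pass = f"{random.uniform(-90, 60):.1f}", True
        if is_valid_reading(raw) and not should_pass:
            missed += 1                            # bad data slipped through
    print(f"injected {injected} bad inputs, {missed} were wrongly accepted")


test_validator()
```

A robust validator should report zero wrongly accepted inputs; any other result points to a gap in the validation logic that would otherwise only surface in production.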
Indeed, the analogy holds: just as genetic makeup and experience shape human behavior, a computer's program and the data it processes determine its output. This reinforces the importance of supplying quality data to obtain reliable results and illustrates the parallel between human and computer operation.
Finally, it is crucial to note that even the best algorithms can miss anomalies the human eye can catch, as exemplified by the Kepler mission's citizen-scientist program. While computers are extremely adept at processing vast amounts of data, human oversight still adds value by recognizing patterns or instances that pre-programmed logic does not anticipate.
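A common pattern that follows from this is human-in-the-loop review: the program handles clear-cut cases automatically and routes anything its rules cannot confidently classify to a person, much as Kepler's automated pipeline was complemented by citizen scientists. The sketch below is hypothetical; the thresholds and the notion of "signal strength" are assumptions made for illustration only.

```python
def triage(signal_strength: float,
           auto_threshold: float = 0.9,
           review_threshold: float = 0.5) -> str:
    """Route a detection based on how confident the automated logic is.

    Hypothetical thresholds: strong signals are accepted automatically,
    weak ones are discarded, and everything in between is flagged for a
    human reviewer, who may spot patterns the logic did not anticipate.
    """
    if signal_strength >= auto_threshold:
        return "accept automatically"
    if signal_strength >= review_threshold:
        return "flag for human review"
    return "discard"


for s in (0.95, 0.7, 0.2):
    print(s, "->", triage(s))
```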