Final answer:
Bernardo should use a Float or Double data type to track dollars and cents in decimal form for his lottery game, preferring Double for its higher precision.
Step-by-step explanation:
For a game about winning a lottery in which variables track dollars and cents in decimal form, the most appropriate data type would be either Float or Double. These types can store numbers with decimal points, which is necessary for representing monetary values that include cents. The choice between them depends on the required precision and on the programming language being used.
Float is typically a single-precision 32-bit IEEE 754 floating-point type, while Double is a double-precision 64-bit type. For most applications where precision matters, Double is preferable. However, in financial applications it is often recommended to use a type designed for monetary values, such as BigDecimal in Java, to avoid the rounding errors inherent in binary floating-point arithmetic.
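To see why BigDecimal is often recommended over double for money, here is a small Java sketch (class name `MoneyExample` is just for illustration) that adds ten cents and twenty cents both ways:

```java
import java.math.BigDecimal;

public class MoneyExample {
    public static void main(String[] args) {
        // double uses binary floating point, so 0.10 and 0.20 are stored
        // as approximations and the sum picks up a rounding error
        double total = 0.10 + 0.20;
        System.out.println(total);        // prints 0.30000000000000004

        // BigDecimal built from Strings keeps exact decimal values
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b));     // prints 0.30
    }
}
```

Note the use of the String constructor: `new BigDecimal(0.10)` would capture the already-inexact double value, defeating the purpose.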