Final answer:
C. Data is split into blocks and distributed to DataNodes. HDFS splits data into blocks and distributes them across DataNodes, replicating each block for fault tolerance. It does not immediately replicate the data to all DataNodes.
Step-by-step explanation:
The action performed by HDFS when data is written to a Hadoop cluster is described by statement C: Data is split into blocks and distributed to DataNodes. This is a core feature of the Hadoop Distributed File System (HDFS). When data is written to HDFS, it is divided into blocks of 128 MB by default (the size is configurable and is often raised to 256 MB on large clusters).
These blocks are then distributed across different DataNodes in the cluster. The system also replicates each block to multiple DataNodes for fault tolerance, creating three copies by default, but this does not happen immediately to all DataNodes as suggested in option A. Option B is incorrect as well: the NameNode stores metadata (file names, block locations, permissions), not the actual data.
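To make the block-and-replication behavior concrete, here is a minimal sketch using the Hadoop FileSystem API in Java. It assumes a reachable HDFS cluster configured via the usual core-site.xml/hdfs-site.xml; the target path and values are illustrative only, and the create(...) overload shown lets the client state the block size and replication factor explicitly rather than relying on cluster defaults.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);            // client handle to HDFS

        Path target = new Path("/user/demo/sample.txt"); // hypothetical target path
        long blockSize = 128L * 1024 * 1024;             // 128 MB blocks (the common default)
        short replication = 3;                           // three copies, the default factor

        // create(path, overwrite, bufferSize, replication, blockSize)
        try (FSDataOutputStream out =
                 fs.create(target, true, 4096, replication, blockSize)) {
            out.writeUTF("hello HDFS");                  // data is split into blocks and pipelined to DataNodes
        }
    }
}

The replication happens in a write pipeline: the client sends each block to the first DataNode, which forwards it to the second, and so on, rather than the data being pushed to every DataNode at once.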
Option D is also incorrect because HDFS does not automatically compress data before storage, although users can choose to compress files before writing them to HDFS.
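As a hedged illustration of that last point, the sketch below shows one way a client can compress data itself before it reaches HDFS, by wrapping the output stream in a Hadoop compression codec (Gzip here). The path and payload are made up for the example; HDFS simply stores whatever bytes the client writes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class HdfsCompressedWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // The client, not HDFS, chooses and applies the codec.
        CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        Path target = new Path("/user/demo/sample.txt.gz");   // hypothetical path

        try (OutputStream out = codec.createOutputStream(fs.create(target))) {
            out.write("hello HDFS".getBytes(StandardCharsets.UTF_8)); // compressed before storage
        }
    }
}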