Answer:
The correct answer to this question is Option A.
Step-by-step explanation:
The given question is incomplete; the complete question reads:
The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss. Which process should you implement?
(A)
Append metadata to the file body.
Compress individual files.
Name files with a random prefix pattern.
Save files to one bucket.
(B)
Batch every 10,000 events with a single manifest file for metadata.
Compress event files and manifest files into a single archive file.
Name files using serverName-EventSequence.
Create a new bucket if a bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to the existing bucket.
(C)
Compress individual files.
Name files with serverName-EventSequence.
Save files to one bucket.
Set custom metadata headers for each object after saving.
(D)
Append metadata to the file body.
Compress individual files.
Name files with serverName-Timestamp.
Create a new bucket if the bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to the existing bucket.
- Therefore, option A is correct: naming files with a random prefix pattern spreads writes evenly across the storage backend's key range, which avoids hotspotting and helps minimize data loss at the 3,000-events-per-second peak.
- Options B, C, and D are incorrect because they name files with sequential patterns such as serverName-EventSequence or serverName-Timestamp. Sequentially sorted names concentrate writes on a narrow, contiguous key range, which leads to an irregular load distribution in the backend. The two naming schemes are contrasted in the sketch below.
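For illustration only, here is a minimal Python sketch contrasting the two naming schemes. It is not part of the original question; the server name, sequence-number format, and eight-character prefix length are assumptions chosen for the example.

```python
import uuid

def random_prefix_name(server: str, event_id: int) -> str:
    """Option A's scheme: a random leading prefix means consecutive
    uploads land far apart in the key range, spreading write load."""
    prefix = uuid.uuid4().hex[:8]  # random 8-char hex prefix (assumed length)
    return f"{prefix}-{server}-{event_id}"

def sequential_name(server: str, event_id: int) -> str:
    """Anti-pattern from options B/C/D: serverName-EventSequence names
    sort adjacently, concentrating writes on one key range (hotspot)."""
    return f"{server}-{event_id:012d}"

if __name__ == "__main__":
    for i in range(3):
        print("random:    ", random_prefix_name("server1", i))
        print("sequential:", sequential_name("server1", i))
```

Running the sketch shows the random-prefix names starting with unrelated characters while the sequential names differ only in their final digits, which is exactly why the latter cluster onto the same backend key range.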