5.3 By convention, a cache is named according to the amount of data it contains (i.e., a 4 KiB cache can hold 4 KiB of data); however, caches also require SRAM to store metadata such as tags and valid bits. For this exercise, you will examine how a cache's configuration affects the total amount of SRAM needed to implement it, as well as the performance of the cache. For all parts, assume that the caches are byte addressable, and that addresses and words are 64 bits.

5.3.1 [10] <§5.3> Calculate the total number of bits required to implement a 32 KiB cache with two-word blocks.

5.3.2 [10] <§5.3> Calculate the total number of bits required to implement a 64 KiB cache with 16-word blocks. How much bigger is this cache than the 32 KiB cache described in Exercise 5.3.1? (Notice that, by changing the block size, we doubled the amount of data without doubling the total size of the cache.)

5.3.3 [5] <§5.3> Explain why this 64 KiB cache, despite its larger data size, might provide slower performance than the first cache.

5.3.4 [10] <§§5.3, 5.4> Generate a series of read requests that have a lower miss rate on a 32 KiB two-way set associative cache than on the cache described in Exercise 5.3.1.

1 Answer

Answer:

5.3.1

A 32 KiB cache with two-word blocks holds 32 KiB of data; the question is how much extra SRAM the tags and valid bits add. Since words are 64 bits (8 bytes), each block holds 2 × 8 = 16 bytes, so the cache has 32 KiB / 16 B = 2048 = 2^11 blocks. With 64-bit addresses, the block offset takes 4 bits and the index takes 11 bits, leaving a tag of 64 − 11 − 4 = 49 bits. Each block therefore stores 128 data bits + 49 tag bits + 1 valid bit = 178 bits, and the total is 2048 × 178 = 364,544 bits (about 44.5 KiB of SRAM for a "32 KiB" cache).
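One way to check the arithmetic is with a few lines of Python (a sketch; the address, word, and cache sizes come straight from the exercise):

```python
# Check for 5.3.1: total SRAM bits for a 32 KiB direct-mapped cache
# with two-word blocks, 64-bit addresses and 64-bit words.
ADDR_BITS = 64
WORD_BITS = 64
words_per_block = 2
cache_data_bytes = 32 * 1024

block_bytes = words_per_block * WORD_BITS // 8        # 16 bytes per block
num_blocks = cache_data_bytes // block_bytes          # 2048 blocks
offset_bits = (block_bytes - 1).bit_length()          # 4 (byte offset)
index_bits = (num_blocks - 1).bit_length()            # 11
tag_bits = ADDR_BITS - index_bits - offset_bits       # 49

bits_per_block = words_per_block * WORD_BITS + tag_bits + 1   # data + tag + valid = 178
total_bits = num_blocks * bits_per_block
print(total_bits)  # 364544
```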

5.3.2

A 64 KiB cache with 16-word blocks: each block holds 16 × 8 = 128 bytes, so the cache has 64 KiB / 128 B = 512 = 2^9 blocks. The block offset takes 7 bits and the index 9 bits, leaving a tag of 64 − 9 − 7 = 48 bits. Each block stores 1024 data bits + 48 tag bits + 1 valid bit = 1073 bits, for a total of 512 × 1073 = 549,376 bits. This is 549,376 / 364,544 ≈ 1.51 times the size of the cache in Exercise 5.3.1: the data capacity doubled, but the total SRAM grew by only about 51%, because the larger blocks amortize the tag and valid-bit overhead over more data.
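The same calculation generalizes to both caches, which makes the size ratio easy to verify (again a sketch; all sizes are taken from the exercise text):

```python
# Total SRAM bits for a direct-mapped cache with the exercise's parameters:
# 64-bit addresses, 64-bit words, a tag and a valid bit per block.
ADDR_BITS = 64
WORD_BITS = 64

def cache_total_bits(cache_data_bytes, words_per_block):
    block_bytes = words_per_block * WORD_BITS // 8
    num_blocks = cache_data_bytes // block_bytes
    offset_bits = (block_bytes - 1).bit_length()
    index_bits = (num_blocks - 1).bit_length()
    tag_bits = ADDR_BITS - index_bits - offset_bits
    return num_blocks * (words_per_block * WORD_BITS + tag_bits + 1)

small = cache_total_bits(32 * 1024, 2)    # 364544 bits (Exercise 5.3.1)
big = cache_total_bits(64 * 1024, 16)     # 549376 bits (Exercise 5.3.2)
print(big, big / small)                   # ratio is about 1.51
```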

5.3.3

The 64 KiB cache, despite its larger data size, might be slower for two reasons. First, its 16-word blocks raise the miss penalty: on each miss, eight times as many words must be fetched from memory before the processor can continue. Second, it has only 512 blocks instead of 2048, so programs with poor spatial locality can suffer more conflict misses, since each large block occupies cache space that mostly goes unused. (Note that both caches are direct-mapped, so associativity is not the difference between them.)
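The miss-penalty effect can be illustrated with a simple average-memory-access-time (AMAT) calculation. The latencies and miss rates below are assumed example values, not figures from the exercise:

```python
# AMAT = hit time + miss rate * miss penalty. The numbers here are
# ASSUMED for illustration: 1-cycle hit, 10 cycles to start a memory
# transfer, 5 cycles per word transferred.
HIT_CYCLES = 1
MEM_SETUP = 10
CYCLES_PER_WORD = 5

def amat(miss_rate, words_per_block):
    miss_penalty = MEM_SETUP + CYCLES_PER_WORD * words_per_block
    return HIT_CYCLES + miss_rate * miss_penalty

# Even with a somewhat lower miss rate, the 16-word-block cache can be
# slower overall because each miss costs far more cycles:
print(amat(0.05, 2))    # 2.0 cycles (two-word blocks)
print(amat(0.04, 16))   # 4.6 cycles (16-word blocks)
```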

5.3.4

The cache in Exercise 5.3.1 is direct-mapped, so any two addresses that share the same index bits conflict with each other even when the rest of the cache is empty. A two-way set associative cache of the same size can hold both blocks in one set. For example, addresses 0x0000 and 0x8000 are exactly 32 KiB apart, so they map to the same location in the direct-mapped cache. The read sequence

0x0000, 0x8000, 0x0000, 0x8000, 0x0000, 0x8000, ...

misses on every access in the direct-mapped cache, because each request evicts the block the other one needs. In the two-way set associative cache, both blocks fit in the same set, so only the first two accesses miss and every later access hits.
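To make this concrete, here is a minimal cache simulator (a sketch, not from the textbook) comparing a direct-mapped and a two-way set associative cache, both 32 KiB with 16-byte blocks and LRU replacement, on an alternating two-address trace:

```python
# Minimal direct-mapped vs. two-way set-associative miss-count simulator.
# Both caches: 32 KiB of data, 16-byte blocks (two 64-bit words).
BLOCK = 16
CACHE = 32 * 1024
NBLOCKS = CACHE // BLOCK          # 2048 blocks

def misses_direct_mapped(addrs):
    tags = [None] * NBLOCKS       # one block per set
    misses = 0
    for a in addrs:
        blk = a // BLOCK
        idx = blk % NBLOCKS
        tag = blk // NBLOCKS
        if tags[idx] != tag:      # miss: install the new block
            misses += 1
            tags[idx] = tag
    return misses

def misses_two_way(addrs):
    nsets = NBLOCKS // 2
    sets = [[] for _ in range(nsets)]   # each set: tags, MRU last
    misses = 0
    for a in addrs:
        blk = a // BLOCK
        idx = blk % nsets
        tag = blk // nsets
        way = sets[idx]
        if tag in way:
            way.remove(tag)       # hit: refresh to MRU position
        else:
            misses += 1
            if len(way) == 2:
                way.pop(0)        # evict the LRU block
        way.append(tag)
    return misses

# Two addresses 32 KiB apart map to the same index in both caches.
trace = [0x0000, 0x8000] * 10
print(misses_direct_mapped(trace))  # 20: every access is a conflict miss
print(misses_two_way(trace))        # 2: only the two compulsory misses
```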
