would a least-recently used algorithm. There are many replacement algorithms that can be used; these are discussed shortly.

Set Associative Cache

Owing to its speed and complexity, associative cache is very expensive. Although direct mapping is inexpensive, it is very restrictive. To see how direct mapping limits cache usage, suppose we are running a program on the architecture described in our previous examples. Suppose the program is using block 0, then block 16, then 0, then 16, and so on as it executes instructions. Blocks 0 and 16 both map to the same location, which means the program would repeatedly throw out 0 to bring in 16, then throw out 16 to bring in 0, even though there are additional blocks in cache not being used. Fully associative cache remedies this problem by allowing a block from main memory to be placed anywhere. However, it requires a larger tag to be stored with the block (which results in a larger cache), in addition to requiring special hardware for searching all blocks in cache simultaneously (which implies a more expensive cache). We need a scheme somewhere in the middle.

The third mapping scheme we introduce is N-way set associative cache mapping, a combination of these two approaches. This scheme is similar to direct mapped cache, in that we use the address to map the block to a certain cache location. The important difference is that instead of mapping to a single cache block, an address maps to a set of several cache blocks. All sets in cache must be the same size. This size can vary from cache to cache. For example, in a 2-way set associative cache, there are two cache blocks per set, as seen in Figure 6.9. In this figure, we see that Set 0 contains two blocks, one that is valid and holds the data A, B, C, . . . , and another that is not valid. The same is true for Set 1. Set 2 and Set 3 can also hold two blocks, but currently, only the second block is valid in each set. In an 8-way set associative cache, there are 8 cache blocks per set. Direct mapped cache is a special case of N-way set associative cache mapping where the set size is one.

In set-associative cache mapping, the main memory address is partitioned into three pieces: the tag field, the set field, and the word field. The tag and word fields assume the same roles as before; the set field indicates into which cache set the main memory block maps. Suppose we are using 2-way set associative mapping with a main memory of 2^14 words and a cache with 16 blocks, where each block contains 8 words. If cache consists of a total of 16 blocks, and each set has 2 blocks, then there are 8 sets in cache.
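The field widths follow directly from these parameters: 8 words per block gives a 3-bit word field, 8 sets give a 3-bit set field, and the remaining 14 - 3 - 3 = 8 bits form the tag. The short C sketch below works this out and splits a sample address into its three fields; the program and its names are illustrative, not part of the text, and it simply assumes the example's parameters (a 2^14-word main memory, a 16-block cache, 8 words per block, 2-way set associative).

#include <stdio.h>
#include <stdint.h>

/* Parameters from the running example in the text. */
#define ADDR_BITS        14                     /* 2^14 addressable words */
#define WORDS_PER_BLOCK   8                     /* => 3-bit word field    */
#define CACHE_BLOCKS     16
#define WAYS              2                     /* blocks per set         */
#define NUM_SETS         (CACHE_BLOCKS / WAYS)  /* 16 / 2 = 8 sets        */

/* Integer log base 2 for power-of-two values. */
static unsigned log2u(unsigned x) {
    unsigned bits = 0;
    while (x > 1) { x >>= 1; bits++; }
    return bits;
}

int main(void) {
    unsigned word_bits = log2u(WORDS_PER_BLOCK);            /* 3 */
    unsigned set_bits  = log2u(NUM_SETS);                   /* 3 */
    unsigned tag_bits  = ADDR_BITS - set_bits - word_bits;  /* 8 */

    printf("tag = %u bits, set = %u bits, word = %u bits\n",
           tag_bits, set_bits, word_bits);

    /* Split a sample 14-bit main memory address into its fields. */
    uint16_t addr = 0x1234 & ((1u << ADDR_BITS) - 1);
    unsigned word = addr & (WORDS_PER_BLOCK - 1);
    unsigned set  = (addr >> word_bits) & (NUM_SETS - 1);
    unsigned tag  = addr >> (word_bits + set_bits);

    /* The block number (address divided by words per block) alone
     * determines the set: blocks 0 and 16 both map to set 0, but with
     * two ways per set they can now reside in cache at the same time. */
    unsigned block = addr >> word_bits;
    printf("address 0x%04X -> tag %u, set %u, word %u (block %u)\n",
           addr, tag, set, word, block);
    return 0;
}

Note how this differs from the direct-mapped case described above: because each set holds two blocks, the alternating references to blocks 0 and 16 no longer evict each other on every access.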
