would a least-recently used algorithm. There are many replacement algorithms
that can be used; these are discussed shortly.
Set Associative Cache
Owing to its speed and complexity, associative cache is very expensive. Although
direct mapping is inexpensive, it is very restrictive. To see how direct mapping
limits cache usage, suppose we are running a program on the architecture
described in our previous examples. Suppose the program is using block 0, then
block 16, then 0, then 16, and so on as it executes instructions. Blocks 0 and 16
both map to the same location, which means the program would repeatedly throw
out 0 to bring in 16, then throw out 16 to bring in 0, even though there are additional
blocks in cache not being used. Fully associative cache remedies this problem
by allowing a block from main memory to be placed anywhere. However, it
requires a larger tag to be stored with the block (which results in a larger cache)
in addition to requiring special hardware for searching all blocks in cache
simultaneously (which implies a more expensive cache). We need a scheme
somewhere in the middle.
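The thrashing behavior described above can be sketched with a small, hypothetical simulation (the 16-block cache and the 0, 16, 0, 16 access pattern are taken from the example; the function names and miss-counting logic are illustrative assumptions, not the text's notation):

```python
def simulate_direct_mapped(accesses, num_lines=16):
    """Count misses for a direct-mapped cache: block maps to (block mod num_lines)."""
    lines = [None] * num_lines
    misses = 0
    for block in accesses:
        line = block % num_lines
        if lines[line] != block:
            misses += 1          # conflict: evict whatever occupied this line
            lines[line] = block
    return misses

def simulate_fully_associative(accesses, num_lines=16):
    """Count misses when a block may be placed in any line (no conflict misses here)."""
    resident = set()
    misses = 0
    for block in accesses:
        if block not in resident:
            misses += 1
            if len(resident) < num_lines:
                resident.add(block)
    return misses

pattern = [0, 16] * 5                       # block 0, block 16, block 0, ...
print(simulate_direct_mapped(pattern))      # 10: every access conflicts
print(simulate_fully_associative(pattern))  # 2: both blocks fit at once
```

Even though 15 of the 16 direct-mapped lines sit empty, blocks 0 and 16 keep evicting each other; the fully associative cache misses only on the first reference to each block.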
The third mapping scheme we introduce is N-way set associative cache mapping,
a combination of these two approaches. This scheme is similar to direct
mapped cache, in that we use the address to map the block to a certain cache location.
The important difference is that instead of mapping to a single cache block,
an address maps to a set of several cache blocks. All sets in cache must be the
same size. This size can vary from cache to cache. For example, in a 2-way set
associative cache, there are two cache blocks per set, as seen in Figure 6.9. In this
figure, we see that set 0 contains two blocks, one that is valid and holds the data
A, B, C, . . . , and another that is not valid. The same is true for Set 1. Set 2 and
Set 3 can also hold two blocks, but currently, only the second block is valid in
each set. In an 8-way set associative cache, there are 8 cache blocks per set.
Direct mapped cache is a special case of N-way set associative cache mapping
where the set size is one.
In set-associative cache mapping, the main memory address is partitioned
into three pieces: the tag field, the set field, and the word field. The tag and word
fields assume the same roles as before; the set field indicates into which cache set
the main memory block maps. Suppose we are using 2-way set associative mapping
with a main memory of 214 words, a cache with 16 blocks, where each block
contains 8 words. If cache consists of a total of 16 blocks, and each set has 2
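The field widths for these parameters can be worked out directly (this arithmetic sketch uses the figures from the example; the variable names are illustrative):

```python
import math

ADDRESS_BITS = 14       # main memory of 2^14 words needs 14-bit addresses
WORDS_PER_BLOCK = 8
CACHE_BLOCKS = 16
WAYS = 2                # 2-way set associative

word_bits = int(math.log2(WORDS_PER_BLOCK))     # 3 bits select a word in a block
num_sets = CACHE_BLOCKS // WAYS                 # 16 / 2 = 8 sets
set_bits = int(math.log2(num_sets))             # 3 bits select the set
tag_bits = ADDRESS_BITS - set_bits - word_bits  # remaining 8 bits form the tag

print(tag_bits, set_bits, word_bits)            # 8 3 3
```

So the 14-bit main memory address partitions into an 8-bit tag, a 3-bit set field, and a 3-bit word field.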
