it would look in block 0 of cache, and if it finds a tag of 000010, the word at offset
4 in this block would be returned to the CPU.
Fully Associative Cache
Direct mapped cache is not as expensive as other caches because the mapping
scheme does not require any searching. Each main memory block has a specific
location to which it maps in cache; when a main memory address is converted to
a cache address, the CPU knows exactly where to look in the cache for that memory
block by simply examining the bits in the block field. This is similar to your
address book: The pages often have an alphabetic index, so if you are searching
for “Joe Smith,” you would look under the “S” tab.
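The direct-mapped address decomposition can be sketched in a few lines of Python. This is a hypothetical helper, not from the text; it assumes the chapter's configuration of a 2^14-word main memory, a 16-block cache, and 8-word blocks, which gives a 3-bit word field, a 4-bit block field, and a 7-bit tag.

```python
WORD_BITS = 3    # log2(8 words per block)
BLOCK_BITS = 4   # log2(16 cache blocks)

def split_direct_mapped(address):
    """Split a 14-bit main memory address into (tag, block, word)."""
    word = address & ((1 << WORD_BITS) - 1)             # low 3 bits
    block = (address >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)  # next 4 bits
    tag = address >> (WORD_BITS + BLOCK_BITS)           # remaining 7 bits
    return tag, block, word

# The CPU looks only in cache block `block`; a hit requires the stored
# tag there to equal `tag`.
print(split_direct_mapped(0b0000010_0000_100))  # → (2, 0, 4)
```

Because the block field pins each memory block to exactly one cache location, no searching is needed; this is what makes direct mapping cheap.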
Instead of specifying a unique location for each main memory block, we can
look at the opposite extreme: allowing a main memory block to be placed anywhere
in cache. The only way to find a block mapped this way is to search all of
cache. (This is similar to your author’s desk!) This requires the entire cache to be
built from associative memory so it can be searched in parallel. That is, a single
search must compare the requested tag to all tags in cache to determine whether
the desired data block is present in cache. Associative memory requires special
hardware to allow associative searching, and is, thus, quite expensive.
Using associative mapping, the main memory address is partitioned into two
pieces, the tag and the word. For example, using our previous memory configuration
with 2^14 words, a cache with 16 blocks, and blocks of 8 words, we see from
Figure 6.8 that the word field is still 3 bits, but now the tag field is 11 bits. This tag
must be stored with each block in cache. When the cache is searched for a specific
main memory block, the tag field of the main memory address is compared to all
the valid tag fields in cache; if a match is found, the block is found. (Remember,
the tag uniquely identifies a main memory block.) If there is no match, we have a
cache miss and the block must be transferred from main memory.
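The lookup described above can be sketched as a simple simulation. This is a hypothetical illustration, not from the text: real associative memory compares the requested tag against every stored tag in parallel in hardware, whereas this sketch loops over the entries. It assumes the same configuration as above, a 2^14-word memory with 8-word blocks, so an address splits into an 11-bit tag and a 3-bit word field.

```python
WORD_BITS = 3  # log2(8 words per block)

def associative_lookup(cache, address):
    """cache: list of (valid, tag, block_data) entries in any order.

    Returns the requested word on a hit, or None on a miss (in which
    case the block would be fetched from main memory).
    """
    tag = address >> WORD_BITS                 # high 11 bits
    word = address & ((1 << WORD_BITS) - 1)    # low 3 bits
    for valid, entry_tag, data in cache:       # hardware does this in parallel
        if valid and entry_tag == tag:
            return data[word]                  # hit: tag uniquely identifies the block
    return None                                # miss

# Example: one valid entry holding the block with tag 5.
cache = [(True, 5, [10, 11, 12, 13, 14, 15, 16, 17])]
print(associative_lookup(cache, (5 << WORD_BITS) | 3))  # → 13
```

Note that because a block can sit anywhere, the full tag must be stored and compared for every entry; that per-entry comparison hardware is what makes fully associative caches expensive.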
With direct mapping, if a block already occupies the cache location where a
new block must be placed, the block currently in cache is removed (it is written
back to main memory if it has been modified or simply overwritten if it has not
been changed). With fully associative mapping, when cache is full, we need a
replacement algorithm to decide which block we wish to throw out of cache (we
call this our victim block). A simple first-in, first-out algorithm would work, as