Imagine you are doing some woodworking in your basement, but your tools are stored out in the garage. You run to the garage, grab the tape measure from a huge tool storage chest, run down to the basement, measure
the wood, run back out to the garage, leave the tape measure, grab the saw,
and then return to the basement with the saw and cut the wood. Now you decide
to bolt some pieces of wood together. So you run to the garage, grab the drill set,
go back down to the basement, drill the holes to put the bolts through, go back to
the garage, leave the drill set, grab one wrench, go back to the basement, find out
the wrench is the wrong size, go back to the tool chest in the garage, grab another
wrench, run back downstairs... wait! Would you really work this way? No!
Being a reasonable person, you think to yourself “If I need one wrench, I will
probably need another one of a different size soon anyway, so why not just grab
the whole set of wrenches?” Taking this one step further, you reason “Once I am
done with one certain tool, there is a good chance I will need another soon, so
why not just pack up a small toolbox and take it to the basement?” This way, you
keep the tools you need close at hand, so access is faster. You have just cached
some tools for easy access and quick use! The tools you are less likely to use
remain stored in a location that is further away and requires more time to access.
This is all that cache memory does: it stores data that has been accessed, and data that is likely to be accessed soon, in a faster memory closer to the CPU.
Another cache analogy is found in grocery shopping. You seldom, if ever, go
to the grocery store to buy one single item. You buy any items you require immediately
in addition to items you will most likely use in the future. The grocery
store is similar to main memory, and your home is the cache. As another example,
consider how many of us carry around an entire phone book. Most of us have a
small address book instead. We enter the names and numbers of people we tend
to call more frequently; looking a number up in our address book is much quicker
than finding a phone book, locating the name, and then getting the number. We
tend to have the address book close at hand, whereas the phone book is probably
located in our home, hidden in an end table or bookcase somewhere. The phone
book is something we do not use frequently, so we can afford to store it in a more out-of-the-way location. Comparing the size of our address book to the telephone book, we see that the address book “memory” is much smaller than the phone book’s. But the probability is very high that when we make a call, it is
to someone in our address book.
Students doing research offer another commonplace cache example. Suppose
you are writing a paper on quantum computing. Would you go to the library,
check out one book, return home, get the necessary information from that book,
go back to the library, check out another book, return home, and so on? No, you
would go to the library and check out all the books you might need and bring
them all home. The library is analogous to main memory, and your home is,
again, similar to cache.
And as a last example, consider how one of your authors uses her office. Any
materials she does not need (or has not used for a period of more than six months)
get filed away in a large set of filing cabinets. However, frequently used “data”
remain piled on her desk, close at hand, and easy (sometimes) to find. If she
needs something from a file, she more than likely pulls the entire file, not simply
one or two papers from the folder. The entire file is then added to the pile on her
desk. The filing cabinets are her “main memory” and her desk (with its many
unorganized-looking piles) is the cache.
Cache memory works on the same basic principles as the preceding examples
by copying frequently used data into the cache rather than requiring an access to
main memory to retrieve the data. Cache can be as unorganized as your author’s
desk or as organized as your address book. Either way, however, the data must be
accessible (locatable). Cache memory in a computer differs from our real-life
examples in one important way: The computer really has no way to know, a priori,
what data is most likely to be accessed, so it uses the locality principle and
transfers an entire block from main memory into cache whenever it has to make a
main memory access. If the probability of using something else in that block is
high, then transferring the entire block saves on access time. The cache location
for this new block depends on two things: the cache mapping policy (discussed in
the next section) and the cache size (which affects whether there is room for the
new block).
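To make the block idea concrete, here is a minimal sketch in Python, assuming a purely hypothetical 16-byte block size, of how a byte address determines the main memory block that gets copied into cache. Any two addresses that share a block number ride into cache together, which is exactly where the locality savings come from.

```python
# Minimal sketch: which main memory block does an address belong to?
# A block size of 16 bytes is an assumption chosen for illustration.

BLOCK_SIZE = 16  # bytes per block (hypothetical)

def block_number(address: int) -> int:
    """Main memory block containing this byte address."""
    return address // BLOCK_SIZE

def block_offset(address: int) -> int:
    """Position of the byte within its block."""
    return address % BLOCK_SIZE

# Addresses 0x2A3 and 0x2A7 fall in the same block, so fetching
# one of them brings the other into cache as well (spatial locality).
for addr in (0x2A3, 0x2A7, 0x2B0):
    print(f"address {addr:#05x}: block {block_number(addr)}, offset {block_offset(addr)}")
```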
The size of cache memory can vary enormously. A typical personal computer’s
level 2 (L2) cache is 256KB or 512KB. Level 1 (L1) cache is smaller, typically 8KB or 16KB. L1 cache resides on the processor, whereas L2 cache resides
between the CPU and main memory. L1 cache is, therefore, faster than L2 cache.
The relationship between L1 and L2 cache can be illustrated using our grocery
store example: If the store is main memory, you could consider your refrigerator
the L2 cache, and the actual dinner table the L1 cache.
The purpose of cache is to speed up memory accesses by storing recently
used data closer to the CPU, instead of storing it in main memory. Although
cache is not as large as main memory, it is considerably faster. Whereas main
memory is typically composed of DRAM with, say, a 60ns access time, cache is
typically composed of SRAM, providing faster access with a much shorter cycle
time than DRAM (a typical cache access time is 10ns). Cache does not need to
be very large to perform well. A general rule of thumb is to make cache small
enough so that the overall average cost per bit is close to that of main memory,
but large enough to be beneficial. Because this fast memory is quite expensive,
it is not feasible to use the technology found in cache memory to build all of
main memory.
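A back-of-the-envelope calculation shows why this arrangement pays off. Using the access times above, an illustrative hit ratio of 95% (an assumed figure, not a measured one), and a simplified model in which a miss costs one full main memory access, the effective access time (EAT) is a weighted average:

EAT = H × (cache access time) + (1 − H) × (main memory access time)
    = 0.95 × 10ns + 0.05 × 60ns = 12.5ns

Even though most of the data still lives in slow DRAM, the average access time lands much closer to the speed of the cache than to that of main memory.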
What makes cache “special”? Cache is not accessed by address; it is accessed by content. For this reason, cache is sometimes called content-addressable memory (CAM). Under most cache mapping schemes, the cache entries must be checked
or searched to see if the value being requested is stored in cache. To simplify this
process of locating the desired data, various cache mapping algorithms are used.
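As a preview of what such a check might look like, the sketch below models one common scheme, a direct-mapped lookup, in Python. The geometry (8 cache lines, 16-byte blocks) and all names here are assumptions for illustration only; the actual mapping schemes are covered in the next section.

```python
# Minimal sketch of a direct-mapped cache lookup (assumed geometry:
# 8 cache lines, 16-byte blocks, byte-addressable memory).

NUM_LINES = 8     # cache lines (hypothetical)
BLOCK_SIZE = 16   # bytes per block (hypothetical)

# Each entry holds (valid bit, tag); all lines start invalid.
cache = [(False, None)] * NUM_LINES

def lookup(address: int) -> bool:
    """Return True on a cache hit, False (and fill the line) on a miss."""
    block = address // BLOCK_SIZE    # strip the offset bits
    line = block % NUM_LINES         # which cache line the block maps to
    tag = block // NUM_LINES         # remaining high-order bits
    valid, stored_tag = cache[line]
    if valid and stored_tag == tag:
        return True                  # hit: the stored content matches
    cache[line] = (True, tag)        # miss: bring the block into cache
    return False

# Two accesses to the same block: the first misses, the second hits.
print(lookup(0x2A3))  # False (cold miss)
print(lookup(0x2A7))  # True  (same block, now cached)
```

The key point is that the stored tag, not the address itself, is what gets compared: the cache is searched by content.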
