When we studied caches, we introduced the notion of effective access time (EAT). We
must also consider the EAT when using virtual memory. There is a time penalty
associated with virtual memory: For each memory access that the processor generates,
there must now be two physical memory accesses—one to reference the
page table and one to reference the actual data we wish to access. It is easy to see
how this affects the effective access time. Suppose a main memory access
requires 200ns and that the page fault rate is 1% (99% of the time we find the
page we need in memory). Assume it costs us 10ms to access a page not in memory
(this time of 10ms includes the time necessary to transfer the page into memory,
update the page table, and access the data). The effective access time for a
memory access is now:
EAT = .99(200ns + 200ns) + .01(10ms) = 100,396ns
Even if 100% of the pages were in main memory, the effective access time would be:
EAT = 1.00(200ns + 200ns) = 400ns,
which is double the access time of memory. Accessing the page table costs us an
additional memory access because the page table itself is stored in main memory.
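As a quick sanity check on this arithmetic, here is a small Python sketch (our illustration, not part of the text) that reproduces both figures under the assumptions above: each logical access costs two 200ns physical accesses, and a page fault costs 10ms in total.

MEMORY_ACCESS_NS = 200        # one physical memory access
PAGE_FAULT_NS = 10_000_000    # 10 ms, including transfer, table update, and data access

def eat_ns(fault_rate: float) -> float:
    """EAT = hit_rate * (two memory accesses) + fault_rate * (fault service time)."""
    hit_rate = 1.0 - fault_rate
    return hit_rate * (2 * MEMORY_ACCESS_NS) + fault_rate * PAGE_FAULT_NS

print(eat_ns(0.01))   # 100396.0 ns, matching the 1% page fault example
print(eat_ns(0.00))   # 400.0 ns, matching the case where every page is in memory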
We can speed up the page table lookup by storing the most recent page
lookup values in a page table cache called a translation look-aside buffer (TLB).
Each TLB entry consists of a virtual page number and its corresponding frame number.

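To make the TLB idea concrete, here is a minimal Python sketch (an illustration, not the book's design); the page table contents, the TLB capacity, and the least-recently-used eviction policy are assumptions chosen only to show the lookup path.

from collections import OrderedDict

PAGE_TABLE = {0: 5, 1: 9, 2: 3, 3: 7}     # hypothetical page table: virtual page -> frame

class TLB:
    def __init__(self, capacity: int = 2):
        self.entries = OrderedDict()       # virtual page -> frame, least recently used first
        self.capacity = capacity

    def lookup(self, vpn: int) -> int:
        if vpn in self.entries:            # TLB hit: no page table access needed
            self.entries.move_to_end(vpn)
            return self.entries[vpn]
        frame = PAGE_TABLE[vpn]            # TLB miss: consult the page table in main memory
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry
        self.entries[vpn] = frame
        return frame

tlb = TLB()
print(tlb.lookup(1))   # miss: page table walk, returns frame 9
print(tlb.lookup(1))   # hit: returned directly from the TLB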