For cache to be functional, it must store useful data. However, this data becomes
useless if the CPU can’t find it. When accessing data or instructions, the CPU
first generates a main memory address. If the data has been copied to cache, the
address of the data in cache is not the same as the main memory address. For
example, data located at main memory address 2E3 could be located in the very
first location in cache. How, then, does the CPU locate data when it has been
copied into cache? The CPU uses a specific mapping scheme that “converts” the
main memory address into a cache location.
This address conversion is done by giving special significance to the bits in
the main memory address. We first divide the bits into distinct groups we call
fields. Depending on the mapping scheme, we may have two or three fields. How
we use these fields depends on the particular mapping scheme being used. The
mapping scheme determines where the data is placed when it is originally copied
into cache and also provides a method for the CPU to find previously copied data
when searching cache.
Before we discuss these mapping schemes, it is important to understand how
data is copied into cache. Main memory and cache are both divided into the same
size blocks (the size of these blocks varies). When a memory address is generated,
cache is searched first to see if the required word exists there. When the
requested word is not found in cache, the entire main memory block in which the
word resides is loaded into cache. As previously mentioned, this scheme is successful
because of the principle of locality—if a word was just referenced, there
is a good chance words in the same general vicinity will soon be referenced as
well. Therefore, one missed word often results in several found words. For example,
when you are in the basement and you first need tools, you have a “miss” and
must go to the garage. If you gather up a set of tools that you might need and
return to the basement, you hope that you’ll have several “hits” while working on
your home improvement project and don’t have to make many more trips to the
garage. Because accessing a cache word (a tool already in the basement) is faster
than accessing a main memory word (going to the garage yet again!), cache memory
speeds up the overall access time.
So, how do we use fields in the main memory address? One field of the
main memory address points us to a location in cache in which the data resides
if it is resident in cache (this is called a cache hit), or where it is to be placed if
it is not resident (which is called a cache miss). (This is slightly different for
associative mapped cache, which we discuss shortly.) The cache block referenced
is then checked to see if it is valid. This is done by associating a valid bit
with each cache block. A valid bit of 0 means the cache block is not valid (we
have a cache miss) and we must access main memory. A valid bit of 1 means it
is valid (we may have a cache hit but we need to complete one more step before
we know for sure). We then compare the tag in the cache block to the tag field
of our address. (The tag is a special group of bits derived from the main memory
address that is stored with its corresponding block in cache.) If the tags are
the same, then we have found the desired cache block (we have a cache hit). At
this point we need to locate the desired word in the block; this can be done
using a different portion of the main memory address called the word field. All
cache mapping schemes require a word field; however, the remaining fields are
determined by the mapping scheme. We discuss the three main cache mapping
schemes on the next page.
Direct Mapped Cache
Direct mapped cache assigns cache mappings using a modular approach. Because
there are more main memory blocks than there are cache blocks, it should be
clear that main memory blocks compete for cache locations. Direct mapping
maps block X of main memory to cache block X mod N, where N is the total
number of blocks in cache. For example, if cache contains 10 blocks, then main
memory block 0 maps to cache block 0, main memory block 1 maps to cache
block 1, . . . , main memory block 9 maps to cache block 9, and main memory
block 10 maps to cache block 0. This is illustrated in Figure 6.2. Thus, main
memory blocks 0 and 10 (and 20, 30, and so on) all compete for cache block 0.
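The modular mapping is simple enough to compute directly; using the text's 10-block cache:

```python
# Direct mapping: main memory block X maps to cache block X mod N,
# where N is the total number of blocks in cache (10 in this example).
N = 10

def cache_block(main_block):
    """Return the cache block to which a main memory block maps."""
    return main_block % N

print(cache_block(0))    # 0
print(cache_block(9))    # 9
print(cache_block(10))   # 0 -- competes with main memory block 0
print(cache_block(20))   # 0 -- and so does block 20
```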
You may be wondering, if main memory blocks 0 and 10 both map to cache
block 0, how does the CPU know which block actually resides in cache block 0 at
any given time? The answer is that each block is copied to cache and identified