Computer Memory and Languages



Computing Fundamentals
Getting to Know the Computer

                                                                                                                                                             
Objectives:-
The objectives of this session are to understand the following:
1.      Human vs Computer Languages.
2.      Difference between Analogue and Digital Data.
3.      How Computer interprets the human language?
4.      What are ASCII codes?
5.      Types of Computer Languages.
6.      Basic Structure of Computer
7.      Difference between storage and Memory.
8.      Different types of storage and memories.
9.      What is register?
10.  What is cache?
11.  What is RAM?
12.  What is ROM?
13.  Why are there different types of memory?

Human vs Computer Language:-

The computer does not understand the language of humans; it does not understand English or Arabic or any other human language. Humans cannot communicate with computers, nor can computers communicate with one another, except through a special language called machine language or binary language. This language is very simple and uses no more than two symbols: 0 and 1.
A programming language is a formal language endowed with semantics that can be used to control the behaviour of a machine, particularly a computer, to perform specific tasks. Programming languages are defined using syntactic and semantic rules, to determine structure and meaning respectively.


Programming Languages:-
It is an artificial language designed to express computations that can be performed by a computer. Programming languages can be used to create programs that control the behaviour of a machine (often a computer), to express algorithms precisely, or as a mode of human communication. Like natural human languages, programming languages conform to rules of syntax and semantics. Machine language is the core code for the hardware of any computer system: whether the user writes in a low-level or a high-level language, the code must be converted to machine language (1s and 0s) before the computer can build its logic and execute it. A program can be written in a language of any generation, and a single PC can run many programs at the same time, each developed in a different language. Interpreters and translators convert between these languages so that code is understandable to both computer users and hardware.
There are thousands of programming languages, new ones are also coming every year and are categorized into generations according to their characteristics, as below:
Machine Languages or Machine Codes:-
1st (First) generation Language (1GL)
Machine language is executed directly by a computer's central processing unit (CPU) and consists of instructions, written in machine code, that the computer can execute directly. Each machine-language statement corresponds to one machine action. Machine language is the only language actually read and understood by the computer. It is the lowest-level language, composed of combinations of zeros and ones, and is therefore also called binary language (or binary code).

In addition to binary, other machine-level notations (such as hexadecimal and octal) were introduced, but binary remained the only code in direct, highly successful use for computer operations. Computers use these codes to perform internal functions in the operating system, hardware drivers and so on. Machine language is hardware-oriented: when data travels on a network as 'data packets' (groups or chunks of machine code), it moves as a sequence of ones and zeros. Machine languages are very hard for humans to understand because they consist entirely of numbers (1s and 0s).
The ones and zeros represent electrical signals in the circuitry: peak voltage is read as one and lowest voltage as zero, similar to the square waveform shown in the figure above:
'One'  can be taken as : Yes - True - ON (peak voltage)
'Zero' can be taken as : No - False - OFF (lowest voltage)
Binary code:-
A code in which each allowable position has one of two possible states, i.e. 0 and 1; the binary number system is one of many numbering systems. The 7-bit ASCII code, for example, represents each character as a pattern of seven digits, each either 0 or 1. The computer does not recognize the letter "A" except as 1000001. Similarly, the letter "K" is 1001011, the letter "Z" is 1011010, the digit "4" is 0110100, the "?" is 0111111, and so on.
For example, a word that appears on the screen as PAKISTAN is recognized by the computer as a series of groups of 0s and 1s:
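The encoding above can be sketched in a few lines of Python. This is a minimal illustration, assuming the standard 8-bit ASCII storage of one byte per character:

```python
# Encode a word as the 8-bit binary codes a computer actually stores,
# one byte per character.
def to_binary(text):
    # format(code, "08b") renders a character's ASCII code as 8 bits
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_binary("PAKISTAN"))
# 'P' = 80 -> 01010000, 'A' = 65 -> 01000001, and so on
```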

Logic Gates:-
Electronic circuits called 'logic gates' are used as the basic building blocks in the manufacture of
ICs and computers. A logic gate takes 1s and 0s as input and always produces an output of either 1 (meaning 'Yes' or 'True') or 0 (meaning 'No' or 'False'). (In the diagram below, A and B are the inputs and Q is the output of a logic gate.)
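The behaviour of these gates can be modelled directly. This is a toy sketch, not real circuitry: each gate maps inputs A and B (each 0 or 1) to an output Q:

```python
# Toy models of the basic logic gates: inputs are 0 or 1,
# and the output Q is always 0 or 1.
def AND(a, b): return a & b   # Q is 1 only when both inputs are 1
def OR(a, b):  return a | b   # Q is 1 when at least one input is 1
def NOT(a):    return 1 - a   # Q is the opposite of the input

# Print the truth table for the AND gate
for a in (0, 1):
    for b in (0, 1):
        print("A =", a, " B =", b, " Q =", AND(a, b))
```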

Low- Level Languages:-
2nd (Second) generation Languages (2GL):-              
Assembly languages are the next level of programming languages. Each assembly-language statement corresponds to one machine-language statement, but assembly language statements are written in a symbolic code that is comparatively easier for humans to read. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
High-Level Programming Languages:- 3rd (Third) Generation Languages (3GL):-
High-level programming languages (HLLs) are, in comparison to low-level programming languages, more abstract, easier to use, and more portable across platforms; they are structured programming languages. Examples include ALGOL, COBOL, BASIC, C, FORTRAN, C++, C#, Pascal, and Java.
4th (Fourth) Generation Languages (4GL):-
A high-level language (HLL) for programming computers does not require detailed knowledge of a specific computer, as a low-level language does. High-level languages do not have to be written for a particular computer, but must be compiled for the computer they will run on. High-level languages are closer to human language than low-level languages, and include statements like GOTO, FOR, NEXT or END, which are regular words. They include Mathematica, MATLAB, NATURAL, Nomad, PL/SQL, Clipper, FoxPro, Panther, PowerBuilder, etc.
5th (Fifth) Generation Languages (5GL):-
A very high-level programming language (VHLL) is a programming language with a very high level of abstraction, used primarily as a professional programmer productivity tool. These are limited to a very specific application, purpose, or type of task. For this reason, very high-level programming languages are often referred to as goal-oriented programming languages. Fifth-generation languages are used mainly in artificial intelligence research; Prolog, OPS5, and Mercury are examples of fifth-generation languages.
Note: Whatever language a computer displays as a front-end program, in the background it uses only machine language, i.e. 1s and 0s.
How Computer Represents Data?
      Computers are digital and use electricity.
      Recognize only two discrete states: on or off
      Use a binary system to recognize two states
      Use a number system with two unique digits, 0 and 1, called bits (short for binary digits).

What is a byte?
      Eight bits grouped together as a unit

      Provides enough different combinations of 0s and 1s to represent 256 individual characters
      Numbers
      Uppercase and lowercase letters
      Punctuation marks

Storage of data (bits & bytes) is often categorized as:
Kilo  (KB)
Roughly 1,000; actually 2^10 (1,024)
Mega  (MB)
Roughly 1,000,000; actually 2^20 (1,048,576)
Giga  (GB)
Roughly 1,000,000,000; actually 2^30 (1,073,741,824)
Tera  (TB)
Roughly 1,000,000,000,000; actually 2^40 (1,099,511,627,776)
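The "roughly" versus "actually" values above all come from powers of two, which a short sketch can verify:

```python
# Each storage unit is a power of two: KB = 2^10, MB = 2^20, etc.
units = {"KB": 10, "MB": 20, "GB": 30, "TB": 40}
for name, power in units.items():
    print(f"1 {name} = 2^{power} = {2 ** power:,} bytes")
```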
Text Representation:-
What are the popular coding systems to represent text?
      ASCII - American Standard Code for Information Interchange - a 7-bit code, usually stored in an 8-bit byte - most common today
      EBCDIC (8 bit) - Extended Binary Coded Decimal Interchange Code - IBM mainframes
      Unicode - a newer coding scheme (originally 16 bit) capable of representing all the world's languages


Data Representation refers to the methods used internally to represent information stored in a computer. Computers store lots of different types of information:
      numbers
      text
      graphics of many varieties (stills, video, animation)
      sound
At least, these all seem different to us. However, ALL types of information stored in a computer are stored internally in the same simple format: a sequence of 0's and 1's.
 How can a sequence of 0's and 1's represent things as diverse as your photograph, your favorite song, a recent movie, and your term paper?
It all depends on how we interpret the information. Computers use numeric codes to represent all the information they store. These codes are similar to those you may have used as a child to encrypt secret notes: let 1 stand for A, 2 stand for B, etc. With this code, any written message can be represented numerically. The codes used by computers are a bit more sophisticated, and they are based on the binary number system (base two) instead of the more familiar (for the moment, at least!) decimal system. Computers use a variety of different codes. Some are used for numbers, others for text, and still others for sound and graphics.
Memory Structure in Computer:-
       Memory consists of:
       bits (0 or 1) - a single bit can represent two pieces of information
       bytes (= 8 bits) - a single byte can represent 2x2x2x2x2x2x2x2 = 2^8 = 256 pieces of information
       words (= 2, 4, or 8 bytes) - a 2-byte word can represent 256^2 pieces of information (approximately 65 thousand)
       Byte addressable - each byte has its own address.
Binary Numbers:-       
Normally we write numbers using digits 0 to 9. This is called base 10. However, any positive integer (whole number) can be easily represented by a sequence of 0's and 1's. Numbers in this form are said to be in base 2 and they are called binary numbers. Base 10 numbers use a positional system based on powers of 10 to indicate their value. The number 123 is really 1 hundred + 2 tens + 3 ones. The value of each position is determined by ever-higher powers of 10, read from left to right. Base 2 works the same way, just with different powers. The number 101 in base 2 is really 1 four + 0 twos + 1 one (which equals 5 in base 10). 
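The positional evaluation described above can be written out step by step. A minimal sketch of converting a base-2 numeral to base 10:

```python
# Evaluate a binary numeral by its positional weights:
# "101" in base 2 is 1*4 + 0*2 + 1*1 = 5 in base 10.
def binary_to_decimal(bits):
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)  # shift value left, add next bit
    return value

print(binary_to_decimal("101"))      # 5
print(binary_to_decimal("1111011"))  # 123
```

Python's built-in `int("101", 2)` performs the same conversion; the loop simply makes the positional arithmetic explicit.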
Text:-
Text can be represented easily by assigning a unique numeric value for each symbol used in the text. For example, the widely used ASCII code (American Standard Code for Information Interchange) defines 128 different symbols (all the characters found on a standard keyboard, plus a few extra), and assigns to each a unique numeric code between 0 and 127. In ASCII, an "A" is 65, "B" is 66, "a" is 97, "b" is 98, and so forth. When you save a file as "plain text", it is stored using ASCII. ASCII format uses 1 byte per character; 1 byte gives 256 possible characters (128 standard and 128 non-standard). The code value for any character can be converted to base 2, so any written message made up of ASCII characters can be converted to a string of 0's and 1's.
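Python exposes this same ASCII mapping through its built-in `ord()` and `chr()` functions, which makes the codes above easy to check:

```python
# ord() gives a character's numeric code; chr() goes the other way.
for ch in "ABab":
    print(ch, "=", ord(ch))

print(chr(65) + chr(66))  # prints "AB"
```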
Graphics:-
Graphics that are displayed on a computer screen consist of pixels: the tiny "dots" of color that collectively "paint" a graphic image on a computer screen. The pixels are organized into many rows on the screen. In one common configuration, each row is 640 pixels long, and there are 480 such rows. Another configuration (and the one used on the screens in the lab) is 800 pixels per row with 600 rows, which is referred to as a "resolution of 800x600." Each pixel has two properties: its location on the screen and its color.
A graphic image can be represented by a list of pixels. Imagine all the rows of pixels on the screen laid out end to end in one long row. This gives the pixel list, and a pixel's location in the list corresponds to its position on the screen. A pixel's color is represented by a binary code, and consists of a certain number of bits. In a monochrome (black and white) image, only 1 bit is needed per pixel: 0 for black, 1 for white, for example. A 16-color image requires 4 bits per pixel. Modern display hardware allows for 24 bits per pixel, which provides an astounding array of 16.7 million possible colors for each pixel!
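The memory cost of a screenful of pixels follows directly from these figures. A rough calculation, using the configurations mentioned above:

```python
# Memory needed for one screen image: pixels * bits-per-pixel / 8.
def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

# 800x600 monochrome: 1 bit per pixel
print(framebuffer_bytes(800, 600, 1))   # 60000 bytes
# 800x600 at 24 bits per pixel ("true color")
print(framebuffer_bytes(800, 600, 24))  # 1440000 bytes
# 24 bits per pixel allow 2^24 distinct colors
print(2 ** 24)                          # 16777216 (~16.7 million)
```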

Analog vs Digital Data:-

There are two main types of data that we are going to discuss here: 
      Analogue Data 
      Digital Data
Analogue Data:-
Definition:-
"Analogue data uses values that change very smoothly."
A good example of this is an analogue clock. An analogue clock shows the time with a smoothly moving seconds hand. The change is continuous.

Sound is also a good example of analogue data. Sound waves change in a very smooth way. This image shows you an example of a smoothly changing sound wave:

Analogue Devices:-
All analogue devices use analogue data. Examples of analogue devices include:
       Microphone
       Headphones
       Loud Speaker
       Sensors (temperature, pressure etc)

Digital Data:-
Definition:-
"Digital data jumps from one value to the next in a step by step sequence."
A good example of this is a digital clock. A digital clock jumps from one second to another in clear steps. The change is not smooth or continuous.

All digital devices use digital data. Examples of digital devices include:
       Computers/Laptops/iPads
       Mobile Phone
       MP3 Player
       Digital Camera
The name "Digital" is given to all devices that store and process data in the form of 'digits' (numbers).
These digits are known as 'binary digits' (bits).

Analogue and Digital Conversion:-
Analogue values can only be used by analogue devices. Digital values can only be used by digital devices.
If we want to use analogue values with a digital device or digital values with an analogue device we need to use data conversion.
There are two types of data converters: 
1.      Analogue to Digital Converter (ADC) 
2.      Digital to Analogue Converter (DAC)
Analogue to Digital Converter (ADC):-                                                                                             
If we try to attach an analogue device (like a microphone) to a computer we will need to convert the analogue data to digital before the computer can use it.
The microphone is used to pass the analogue sound waves through the ADC which will convert the sound from analogue to digital.
The ADC then passes the converted digital data into the computer where the sound can be stored and edited.
The image below will help explain this process: 

In this example the ADC that converts the analogue values to digital would be the computer's sound card.
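A highly simplified ADC can be sketched in code: sample a smooth signal at fixed intervals and round each sample to the nearest digital level. The sine-wave signal here is just an illustrative stand-in for a real microphone input:

```python
import math

# Toy ADC: take num_samples readings of an analogue signal in [0, 1]
# and quantize each to one of `levels` discrete integer steps.
def adc(signal, num_samples, levels):
    samples = []
    for i in range(num_samples):
        t = i / num_samples
        value = signal(t)                    # smooth analogue value
        step = round(value * (levels - 1))   # nearest digital level
        samples.append(step)
    return samples

# A smooth "sound wave" rescaled into [0, 1]
wave = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t)
print(adc(wave, 8, 16))  # eight digital samples, each one of 16 levels
```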
Digital to Analogue Converter (DAC):-
If we want to listen to digital music (like mp3's) we would need to attach an analogue device such as loud speakers or headphones to our computer.
The computer will pass the digital sound values through a DAC (located on a sound card) which will convert the digital data to analogue.
The DAC then passes the converted analogue data onto the analogue loud speaker which we would then hear as sound waves.
The image below will help explain this process: 

            
Computer sound cards can perform both types of data conversion (ADC and DAC).
Another example of Data Conversion:- 
Imagine we had a greenhouse and we wanted a way to control the temperature inside automatically. We could do this using a range of analogue and digital devices and ADC's/DAC's to convert all of the data.
This is how it would work:
1.   An analogue thermometer gathers smoothly changing temperature data.
2.   The analogue data is converted to digital using an ADC and fed into a digital computer.
3.   The computer reads the digital data and decides if the temperature is too hot or too cold.
4.   The computer sends data to a DAC built into the heater with one of two instructions:
     If the temperature is too hot, the heater is turned off; if the temperature is too cold, the heater is turned on.
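The decision step in this greenhouse loop can be sketched as a tiny function. The target temperature here is an assumed value for illustration:

```python
# Decide the heater instruction from a digital temperature reading.
# target_c = 20.0 is an illustrative threshold, not from the text.
def heater_command(temperature_c, target_c=20.0):
    if temperature_c < target_c:
        return "ON"   # too cold: turn the heater on
    return "OFF"      # warm enough or too hot: turn it off

print(heater_command(15.0))  # ON
print(heater_command(25.0))  # OFF
```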
Look at the image below for an example:


The Standard ASCII Character Set:-
Bytes are frequently used to hold individual characters in a text document. In the ASCII character set, each binary value between 0 and 127 is given a specific character. Most computers extend the ASCII character set to use the full range of 256 characters available in a byte. The upper 128 characters handle special things like accented characters from common foreign languages.
You can see the 128 standard ASCII (American Standard Code for Information Interchange) codes below. Computers store text documents, both on disk and in memory, using these codes. For example, if you use Notepad in Windows 95/98 to create a text file containing the words, "Four score and seven years ago," Notepad would use 1 byte of memory per character (including 1 byte for each space character between the words -- ASCII character 32). When Notepad stores the sentence in a file on disk, the file will also contain 1 byte per character and per space.
Try this experiment: Open up a new file in Notepad and insert the sentence, "Four score and seven years ago" in it. Save the file to disk under the name getty.txt. Then use the explorer and look at the size of the file. You will find that the file has a size of 30 bytes on disk: 1 byte for each character. If you add another word to the end of the sentence and re-save it, the file size will jump to the appropriate number of bytes. Each character consumes a byte.
If you were to look at the file as a computer looks at it, you would find that each byte contains not a letter but a number -- the number is the ASCII code corresponding to the character (see below). So on disk, the numbers for the file look like this:
 F   o   u   r       a   n   d       s   e   v   e   n
70 111 117 114  32  97 110 100  32 115 101 118 101 110

By looking in the ASCII table, you can see a one-to-one correspondence between each character and the ASCII code used. Note the use of 32 for a space -- 32 is the ASCII code for a space. We could expand these decimal numbers out to binary numbers (so 32 = 00100000) if we wanted to be technically correct -- that is how the computer really deals with things.
The first 32 values (0 through 31) are codes for things like carriage return and line feed. The space character is the 33rd value, followed by punctuation, digits, uppercase characters and lowercase characters.
Binary math works just like decimal math, except that the value of each bit can be only 0 or 1. To get a feel for binary math, let's start with decimal addition and see how it works. Assume that we want to add 452 and 751:
  452
+ 751
-----
 1203

To add these two numbers together, you start at the right: 2 + 1 = 3. No problem. Next, 5 + 5 = 10, so you save the zero and carry the 1 over to the next place. Next, 4 + 7 + 1 (because of the carry) = 12, so you save the 2 and carry the 1. Finally, 0 + 0 + 1 = 1. So the answer is 1203.
Binary addition works exactly the same way:
  010
+ 111
-----
 1001

Starting at the right, 0 + 1 = 1 for the first digit. No carrying there. You've got 1 + 1 = 10 for the second digit, so save the 0 and carry the 1. For the third digit, 0 + 1 + 1 = 10, so save the zero and carry the 1. For the last digit, 0 + 0 + 1 = 1. So the answer is 1001. If you translate everything over to decimal you can see it is correct: 2 + 7 = 9.
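The digit-by-digit procedure just described can be written out directly, with the carries made explicit:

```python
# Add two binary numerals digit by digit, carrying just as in the
# worked example above (010 + 111 = 1001).
def add_binary(a, b):
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result.append(str(total % 2))  # digit to keep
        carry = total // 2             # digit to carry
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("010", "111"))  # 1001  (2 + 7 = 9)
```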
The ASCII Table is given below:



Types of Computer Languages:-

Computer Languages:-
All over the world, language is the source of communication among human beings. Different countries and regions have different languages. Similarly, in order to communicate with the computer, the user needs a language that the computer can understand. For this purpose, different languages have been developed for performing different types of work on the computer. Basically, languages are divided into two categories according to their interpretation:
1.  Low Level Languages.
2.  High Level Languages.
Low Level Languages:-
Low-level computer languages are machine codes or close to it. A computer cannot understand instructions given in high-level languages or in English; it can only understand and execute instructions given in the form of machine language, i.e. the language of 0s and 1s. There are two types of low-level languages:
       Machine Language.
       Assembly Language

Machine Language:-

It is the lowest and most elementary level of programming language and was the first type of programming language to be developed. Machine language is basically the only language a computer can understand. In fact, a manufacturer designs a computer to obey just one language, its machine code, which is represented inside the computer by a string of binary digits (bits), 0 and 1. The symbol 0 stands for the absence of an electric pulse and 1 for the presence of an electric pulse. Since a computer is capable of recognizing electric signals, it understands machine language.
Advantages of Machine Language:-
      It makes fast and efficient use of the computer.
      It requires no translator to translate the code i.e. directly understood by the computer.
Disadvantages of Machine Language:
      All operation codes have to be remembered
      All memory addresses have to be remembered.
      It is hard to amend or find errors in a program written in the machine language
      These languages are machine dependent i.e. a particular Machine language can be used on only one type of computer
Assembly Language:-
It was developed to overcome some of the many inconveniences of machine language. It is another low-level but very important language, in which operation codes and operands are given in the form of alphanumeric symbols instead of 0s and 1s. These alphanumeric symbols are known as mnemonic codes and can be combinations of up to five letters, e.g. ADD for addition, SUB for subtraction, START, LABEL, etc. Because of this feature it is also known as a 'symbolic programming language'. This language is still difficult and needs a lot of practice to master, because only very limited English support is given to it. The language mainly helps in compiler orientation. The instructions of assembly language are converted to machine codes by a language translator before being executed by the computer.
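What an assembler does can be illustrated with a toy translation step. The mnemonics and opcode numbers below are invented for illustration; they do not belong to any real instruction set:

```python
# Toy assembler: translate mnemonic opcodes into numeric machine codes.
# The mnemonic-to-opcode table is hypothetical, for illustration only.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "SUB": 0b0011, "STORE": 0b0100}

def assemble(line):
    mnemonic, operand = line.split()
    # pack a 4-bit opcode and a 4-bit operand into one byte
    return (OPCODES[mnemonic] << 4) | int(operand)

program = ["LOAD 7", "ADD 3", "STORE 2"]
for line in program:
    print(line, "->", format(assemble(line), "08b"))
```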
Advantages of Assembly Language:-
      It is easier to understand and use as compared to machine language.
      It is easy to locate and correct errors.
      It is modified easily
Disadvantages of Assembly Language:-
      Like machine language it is also machine dependent.
      Since it is machine dependent therefore programmer should have the knowledge of the hardware also.
High Level Languages:-
High-level computer languages use formats close to the English language, and the purpose of developing high-level languages is to enable people to write programs easily, in their own native language environment (English). High-level languages are basically symbolic languages that use English words and/or mathematical symbols rather than mnemonic codes. Each instruction in a high-level language is translated into many machine-language instructions, showing a one-to-many translation.
Types of High Level Languages:-
Many languages have been developed for achieving different variety of tasks, some are fairly specialized others are quite general purpose. These are categorized according to their use as:
a) Algebraic, Formula-Type Processing: these languages are oriented towards the computational procedures for solving mathematical and statistical problems.
Examples are
        BASIC (Beginners All Purpose Symbolic Instruction Code).
        FORTRAN (Formula Translation).
        PL/I (Programming Language, Version 1).
        ALGOL (Algorithmic Language).
        APL (A Programming Language).
b) Business Data Processing:
These languages emphasize their capabilities for maintaining data processing procedures and files handling problems. Examples are:
         COBOL (Common Business Oriented Language).
         RPG (Report Program Generator).
c) String and List Processing: these are used for string manipulation, including searching for patterns and inserting and deleting characters. Examples are:
        LISP (List Processing).
        Prolog (Program in Logic).
d) Object-Oriented Programming Languages:
In OOP, the computer program is divided into objects. Examples are:
        C++
        Java
e) Visual Programming Languages: these are designed for building Windows-based applications. Examples are:
       Visual Basic
       Visual Java
       Visual C
Advantages of High Level Language:-
Following are the advantages of a high level language:
       User-friendly.
       Similar to English, with a vocabulary of words and symbols, and therefore easier to learn.
       They require less time to write.
       They are easier to maintain.
       Problem-oriented rather than 'machine' based.
       A program written in a high-level language can be translated into many machine languages, and can therefore run on any computer for which an appropriate translator exists.
       It is independent of the machine on which it is used, i.e. programs developed in a high-level language can be run on any computer.
Disadvantages of High Level Language:-
       A high-level language has to be translated into machine language by a translator, so a price is paid in computer time.
       The object code generated by a translator might be inefficient compared to an equivalent assembly-language program.
             

Basic Structure of a Computer:-

The following figure shows the block diagram of the basic functional units of a computer.

A computer consists of three main parts:
      A processor (CPU)
      A main-memory system
      An I/O system

The Difference between Memory and Storage:-

People often confuse the terms memory and storage, especially when describing the amount they have of each. The term memory refers to the amount of RAM installed in the computer, whereas the term storage refers to the capacity of the computer’s hard disk. To clarify this common mix-up, it helps to compare your computer to an office that contains a desk and a file cabinet.
The file cabinet represents the computer's hard disk, which provides storage for all the files and information you need in your office. When you come in to work, you take out the files you need from storage and put them on your desk for easy access while you work on them. The desk is like memory in the computer: it holds the information and data you need to have handy while you're working.

Consider the desk-and-file-cabinet metaphor for a moment. Imagine what it would be like if every time you wanted to look at a document or folder you had to retrieve it from the file drawer. It would slow you down tremendously, not to mention drive you crazy. With adequate desk space – our metaphor for memory – you can lay out the documents in use and retrieve information from them immediately, often with just a glance.

Here’s another important difference between memory and storage: the information stored on a hard disk remains intact even when the computer is turned off. However, any data held in
memory is lost when the computer is turned off. In our desk space metaphor, it’s as though any files left on the desk at closing time will be thrown away.
How Computer Memory Works?
Although memory is technically any form of electronic storage, it is used most often to identify fast, temporary forms of storage. If your computer's CPU had to constantly access the hard drive to retrieve every piece of data it needs, it would operate very slowly. When the information is kept in memory, the CPU can access it much more quickly. Most forms of memory are intended to store data temporarily.

As you can see in the diagram above, the CPU accesses memory according to a distinct hierarchy. Whether it comes from permanent storage (the hard drive) or input (the keyboard), most data goes into random access memory (RAM) first. The CPU then stores pieces of data it will need to access, often in a cache, and maintains certain special instructions in the register. We'll talk about cache and registers later.
All of the components in your computer, such as the CPU, the hard drive and the operating system, work together as a team, and memory is one of the most essential parts of this team. From the moment you turn your computer on until the time you shut it down, your CPU is constantly using memory. Let's take a look at a typical scenario:
You turn the computer on.
      The computer loads data from read-only memory (ROM) and performs a power-on self-test (POST) to make sure all the major components are functioning properly. As part of this test, the memory controller checks all of the memory addresses with a quick read/write operation to ensure that there are no errors in the memory chips. Read/write means that data is written to a bit and then read from that bit.
      The computer loads the basic input/output system (BIOS) from ROM. The BIOS provides the most basic information about storage devices, boot sequence, security, Plug and Play (auto device recognition) capability and a few other items.
      The computer loads the operating system (OS) from the hard drive into the system's RAM. Generally, the critical parts of the operating system are maintained in RAM as long as the computer is on. This allows the CPU to have immediate access to the operating system, which enhances the performance and functionality of the overall system.
      When you open an application, it is loaded into RAM. To conserve RAM usage, many applications load only the essential parts of the program initially and then load other pieces as needed.
      After an application is loaded, any files that are opened for use in that application are loaded into RAM.
      When you save a file and close the application, the file is written to the specified storage device, and then it and the application are purged from RAM.
In the list above, every time something is loaded or opened, it is placed into RAM. This simply means that it has been put in the computer's temporary storage area so that the CPU can access that information more easily. The CPU requests the data it needs from RAM, processes it and writes new data back to RAM in a continuous cycle. In most computers, this shuffling of data between the CPU and RAM happens millions of times every second. When an application is closed, it and any accompanying files are usually purged (deleted) from RAM to make room for new data. If the changed files are not saved to a permanent storage device before being purged, they are lost.

One common question about desktop computers that comes up all the time is, "Why does a computer need so many memory systems?"

Types of Computer Memory:-
A typical computer has:
      Level 1 and level 2 caches
      Normal system RAM
      Virtual memory
      A hard disk

Why so many? The answer to this question can teach you a lot about memory!

Memory Management:-

Fast, powerful CPUs need quick and easy access to large amounts of data in order to maximize their performance. If the CPU cannot get to the data it needs, it literally stops and waits for it. Modern CPUs running at speeds of about 1 gigahertz can consume massive amounts of data -- potentially billions of bytes per second. The problem that computer designers face is that memory that can keep up with a 1-gigahertz CPU is extremely expensive -- much more expensive than anyone can afford in large quantities.
Computer designers have solved the cost problem by "tiering" memory -- using expensive memory in small quantities and then backing it up with larger quantities of less expensive memory.
The cheapest form of read/write memory in wide use today is the hard disk. Hard disks provide large quantities of inexpensive, permanent storage. You can buy hard disk space for pennies per megabyte, but it can take a good bit of time (approaching a second) to read a megabyte off a hard disk. Because storage space on a hard disk is so cheap and plentiful, it forms the final stage of a CPU's memory hierarchy, called virtual memory.
The next level of the hierarchy is RAM. 
The bit size of a CPU tells you how many bytes of information it can access from RAM at the same time. For example, a 16-bit CPU can process 2 bytes at a time (1 byte = 8 bits, so 16 bits = 2 bytes), and a 64-bit CPU can process 8 bytes at a time.
Megahertz (MHz) is a measure of a CPU's clock rate in millions of cycles per second. So, a 32-bit 800-MHz Pentium III can potentially process 4 bytes simultaneously, 800 million times per second (possibly more, thanks to pipelining)! The goal of the memory system is to meet those requirements.
A computer's system RAM alone is not fast enough to match the speed of the CPU. That is why you need a cache (discussed later). However, the faster RAM is, the better. Most chips today operate with a cycle rate of 50 to 70 nanoseconds. The read/write speed is typically a function of the type of RAM used, such as DRAM, SDRAM, RAMBUS. We will talk about these various types of memory later.
First, let's talk about system RAM.
System RAM:-
System RAM speed is controlled by bus width and bus speed. Bus width refers to the number of bits that can be sent to the CPU simultaneously, and bus speed refers to the number of times a group of bits can be sent each second. A bus cycle occurs every time data travels from memory to the CPU. For example, a 100-MHz 32-bit bus is theoretically capable of sending 4 bytes (32 bits divided by 8 = 4 bytes) of data to the CPU 100 million times per second, while a 66-MHz 16-bit bus can send 2 bytes of data 66 million times per second. If you do the math, you'll find that simply changing the bus width from 16 bits to 32 bits and the speed from 66 MHz to 100 MHz in our example allows for three times as much data (400 million bytes versus 132 million bytes) to pass through to the CPU every second.
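The bus arithmetic above is easy to check directly. Here is a minimal sketch in Python (the function name is just for illustration) that reproduces the figures from the text:

```python
# Theoretical peak bus throughput: bytes per cycle times cycles per second.

def bus_throughput(bus_width_bits, bus_speed_hz):
    """Bytes delivered to the CPU per second for a given bus width and speed."""
    return (bus_width_bits // 8) * bus_speed_hz

wide_fast = bus_throughput(32, 100_000_000)   # 100-MHz 32-bit bus
narrow_slow = bus_throughput(16, 66_000_000)  # 66-MHz 16-bit bus

print(wide_fast)                  # 400000000 bytes per second
print(narrow_slow)                # 132000000 bytes per second
print(wide_fast / narrow_slow)    # roughly three times as much data
```

The same formula gives the CPU-side figures from the previous section: a 32-bit 800-MHz CPU works out to `bus_throughput(32, 800_000_000)`, or 3.2 billion bytes per second.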

In reality, RAM doesn't usually operate at optimum speed. Latency changes the equation radically.
Latency refers to the number of clock cycles needed to read a bit of information. For example, RAM rated at 100 MHz is capable of sending a bit in 0.00000001 seconds, but may take 0.00000005 seconds to start the read process for the first bit. To compensate for latency, CPUs use a special technique called burst mode.
Burst mode depends on the expectation that data requested by the CPU will be stored in sequential memory cells. The memory controller anticipates that whatever the CPU is working on will continue to come from this same series of memory addresses, so it reads several consecutive bits of data together. This means that only the first bit is subject to the full effect of latency; reading successive bits takes significantly less time. The rated burst mode of memory is normally expressed as four numbers separated by dashes. The first number tells you the number of clock cycles needed to begin a read operation; the second, third and fourth numbers tell you how many cycles are needed to read each consecutive bit in the row, also known as the word line. For example: 5-1-1-1 tells you that it takes five cycles to read the first bit and one cycle for each bit after that. Obviously, the lower these numbers are, the better the performance of the memory.
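The burst-mode rating described above can be turned into a total cycle count with a one-line calculation. A minimal sketch (the function name is just for illustration):

```python
# Total clock cycles to read a burst, given a rating such as 5-1-1-1:
# the first number is the cycles to begin the read, the rest are the
# cycles for each consecutive bit in the row.

def burst_cycles(rating):
    """Sum of the cycles for the first access plus each consecutive access."""
    return sum(rating)

rating = [5, 1, 1, 1]
print(burst_cycles(rating))            # 8 cycles for four reads
# Without burst mode, every read would pay the full latency:
print(len(rating) * rating[0])         # 20 cycles for the same four reads
```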
Burst mode is often used in conjunction with pipelining, another means of minimizing the effects of latency. Pipelining organizes data retrieval into a sort of assembly-line process. The memory controller simultaneously reads one or more words from memory, sends the current word or words to the CPU and writes one or more words to memory cells. Used together, burst mode and pipelining can dramatically reduce the lag caused by latency.
So why wouldn't you buy the fastest, widest memory you can get? The speed and width of the memory's bus should match the system's bus. You can use memory designed to work at 100 MHz in a 66-MHz system, but it will run at the 66-MHz speed of the bus so there is no advantage, and 32-bit memory won't fit on a 16-bit bus. 
Even with a wide and fast bus, it still takes longer for data to get from the memory card to the CPU than it takes for the CPU to actually process the data. That's where caches come in.
Cache and Registers:-
       Caches are designed to alleviate this bottleneck by making the data used most often by the CPU instantly available. This is accomplished by building a small amount of memory, known as primary or level 1 cache, right into the CPU. Level 1 cache is very small, normally ranging between 2 kilobytes (KB) and 64 KB.
       The secondary or level 2 cache typically resides on a memory card located near the CPU. The level 2 cache has a direct connection to the CPU. A dedicated integrated circuit on the motherboard, the L2 controller, regulates the use of the level 2 cache by the CPU. Depending on the CPU, the size of the level 2 cache ranges from 256 KB to 2 megabytes (MB). In most systems, data needed by the CPU is accessed from the cache approximately 95 percent of the time, greatly reducing the overhead needed when the CPU has to wait for data from the main memory.
       Some inexpensive systems dispense with the level 2 cache altogether. Many high-performance CPUs now have the level 2 cache built into the CPU chip itself. Therefore, the size of the level 2 cache and whether it is onboard (on the CPU) is a major determining factor in the performance of a CPU.
       A particular type of RAM, static random access memory (SRAM), is used primarily for cache. SRAM uses multiple transistors, typically four to six, for each memory cell. Each cell is a bistable multivibrator (a flip-flop) that switches between two stable states. This means that it does not have to be continually refreshed like DRAM; each cell will maintain its data as long as it has power. Without the need for constant refreshing, SRAM can operate extremely quickly, but the complexity of each cell makes it prohibitively expensive for use as standard RAM.
       The SRAM in the cache can be asynchronous or synchronous. Synchronous SRAM is designed to exactly match the speed of the CPU, while asynchronous is not. That little bit of timing makes a difference in performance. Matching the CPU's clock speed is a good thing, so always look for synchronized SRAM. 
       The final step in memory is the registers. These are memory cells built right into the CPU that hold the specific data the CPU is working on, particularly the operands and results of the arithmetic and logic unit (ALU). An integral part of the CPU itself, they are managed directly by the instructions that the compiler or assembler generates for the CPU to execute.
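The 95 percent hit rate quoted above translates into a large average speedup. A minimal sketch of the standard weighted-average calculation (the cycle counts here are illustrative assumptions, not measured values):

```python
# Effective (average) access time with a cache. Hits are served at cache
# speed; misses pay the full trip to main memory. The cycle counts below
# are illustrative assumptions, not figures from any particular CPU.

def effective_access_time(hit_rate, cache_cycles, memory_cycles):
    """Weighted average of cache hits and misses that go to main memory."""
    return hit_rate * cache_cycles + (1 - hit_rate) * memory_cycles

# Assume a 2-cycle cache and a 50-cycle trip to main memory.
print(effective_access_time(0.95, 2, 50))  # about 4.4 cycles on average, versus 50 without a cache
```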

A Simple Example: Before Cache
Caching is a technology based on the memory subsystem of your computer. The main purpose of a cache is to accelerate your computer while keeping the price of the computer low. Caching allows you to do your computer tasks more rapidly.
To understand the basic idea behind a cache system, let's start with a super-simple example that uses a librarian to demonstrate caching concepts. Let's imagine a librarian behind his desk. He is there to give you the books you ask for. For the sake of simplicity, let's say you can't get the books yourself -- you have to ask the librarian for any book you want to read, and he fetches it for you from a set of stacks in a storeroom (the Library of Congress in Washington, D.C., is set up this way). First, let's start with a librarian without cache.
The first customer arrives. He asks for the book Moby Dick. The librarian goes into the storeroom, gets the book, returns to the counter and gives the book to the customer. Later, the client comes back to return the book. The librarian takes the book and returns it to the storeroom. He then returns to his counter waiting for another customer. Let's say the next customer asks for Moby Dick (you saw it coming...). The librarian then has to return to the storeroom to get the book he recently handled and give it to the client. Under this model, the librarian has to make a complete round trip to fetch every book -- even very popular ones that are requested frequently. Is there a way to improve the performance of the librarian?
Yes, there's a way -- we can put a cache on the librarian. In the next section, we'll look at this same example but this time, the librarian will use a caching system.

A Simple Example: After Cache

Let's give the librarian a backpack into which he will be able to store 10 books (in computer terms, the librarian now has a 10-book cache). In this backpack, he will put the books the clients return to him, up to a maximum of 10. Let's use the prior example, but now with our new-and-improved caching librarian.
The day starts. The backpack of the librarian is empty. Our first client arrives and asks for Moby Dick. No magic here -- the librarian has to go to the storeroom to get the book. He gives it to the client. Later, the client returns and gives the book back to the librarian. Instead of returning to the storeroom to return the book, the librarian puts the book in his backpack and stands there (he checks first to see if the bag is full -- more on that later). Another client arrives and asks for Moby Dick. Before going to the storeroom, the librarian checks to see if this title is in his backpack. He finds it! All he has to do is take the book from the backpack and give it to the client. There's no journey into the storeroom, so the client is served more efficiently.
What if the client asked for a title not in the cache (the backpack)? In this case, the librarian is less efficient with a cache than without one, because the librarian takes the time to look for the book in his backpack first. One of the challenges of cache design is to minimize the impact of cache searches, and modern hardware has reduced this time delay to practically zero. Even in our simple librarian example, the latency time (the waiting time) of searching the cache is so small compared to the time to walk back to the storeroom that it is irrelevant. The cache is small (10 books), and the time it takes to notice a miss is only a tiny fraction of the time that a journey to the storeroom takes.
From this example you can see several important facts about caching:
       Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type.
       When using a cache, you must check the cache to see if an item is in there. If it is there, it's called a cache hit. If not, it is called a cache miss and the computer must wait for a round trip from the larger, slower memory area.
       A cache has some maximum size that is much smaller than the larger storage area.
       It is possible to have multiple layers of cache. With our librarian example, the smaller but faster memory type is the backpack, and the storeroom represents the larger and slower memory type. This is a one-level cache. There might be another layer of cache consisting of a shelf that can hold 100 books behind the counter. The librarian can check the backpack, then the shelf and then the storeroom. This would be a two-level cache.
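The librarian's backpack can be sketched as a tiny one-level cache in code. This is a toy model of the example above: the book titles and the 10-slot limit come from the text, but the eviction rule (drop the oldest book when the backpack is full) is an assumption, since the text does not say which book gets removed:

```python
# A toy one-level cache modeled on the librarian example: a fixed-size
# "backpack" (the cache) backed by a slow "storeroom" (main storage).

from collections import OrderedDict

class Librarian:
    def __init__(self, capacity=10):
        self.backpack = OrderedDict()   # the cache, in insertion order
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def fetch(self, title):
        if title in self.backpack:      # cache hit: no storeroom trip
            self.hits += 1
        else:                           # cache miss: walk to the storeroom
            self.misses += 1
            if len(self.backpack) >= self.capacity:
                self.backpack.popitem(last=False)  # drop the oldest book (assumed policy)
            self.backpack[title] = True
        return title

lib = Librarian()
for book in ["Moby Dick", "Moby Dick", "Walden", "Moby Dick"]:
    lib.fetch(book)
print(lib.hits, lib.misses)  # 2 hits, 2 misses
```

The first request for each title is a miss (a storeroom trip); every repeat request for "Moby Dick" is served straight from the backpack.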

How Virtual Memory Works:-
Virtual memory is a common part of most operating systems on desktop computers. It has become so common because it provides a big benefit for users at a very low cost.
In this session, you will learn exactly what virtual memory is, what your computer uses it for and how to configure it on your own machine to achieve optimal performance.
Most computers today have something like 32 or 64 megabytes of RAM available for the CPU to use (see How RAM Works for details on RAM). Unfortunately, that amount of RAM is not enough to run all of the programs that most users expect to run at once.
For example, if you load the operating system, an e-mail program, a Web browser and word processor into RAM simultaneously, 32 megabytes is not enough to hold it all. If there were no such thing as virtual memory, then once you filled up the available RAM your computer would have to say, "Sorry, you cannot load any more applications. Please close another application to load a new one." With virtual memory, what the computer can do is look at RAM for areas that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application.
Because this copying happens automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it only has 32 megabytes installed. Because hard disk space is so much cheaper than RAM chips, it also has a nice economic benefit. 
The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously -- then, the only time you "feel" the slowness of virtual memory is when there's a slight pause when you're changing tasks. When that's the case, virtual memory is perfect.
When it is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.
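The "copy out what hasn't been used recently" step described above is essentially least-recently-used (LRU) page replacement. A minimal sketch of the idea (the `Pager` class and page names are illustrative, not from any real operating system):

```python
# A toy LRU pager: "RAM" holds a fixed number of pages; touching a page
# that isn't resident evicts the least recently used page to "disk".

from collections import OrderedDict

class Pager:
    def __init__(self, ram_pages):
        self.ram = OrderedDict()        # resident pages, kept in LRU order
        self.ram_pages = ram_pages
        self.swapped_out = []           # pages "written to the page file"

    def touch(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)  # recently used: move to the back
        else:
            if len(self.ram) >= self.ram_pages:
                victim, _ = self.ram.popitem(last=False)  # least recently used
                self.swapped_out.append(victim)
            self.ram[page] = True

pager = Pager(ram_pages=2)
for p in ["os", "email", "browser"]:    # the third program overflows RAM
    pager.touch(p)
print(pager.swapped_out)  # ['os'] -- the least recently used page went to disk
```

Thrashing corresponds to the case where the working set is larger than `ram_pages`, so nearly every `touch` triggers an eviction.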
The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. On a Windows machine, the page file is typically named pagefile.sys (older versions of Windows used swap files with a .SWP extension).

Challenge! 
Differentiate between SRAM and DRAM memory.

Difference between RAM and ROM:-
There is one major difference between a ROM and a RAM chip. A ROM chip is non-volatile storage and does not require a constant source of power to retain the information stored on it. When power is lost or turned off, a ROM chip will keep the information stored on it. In contrast, a RAM chip is volatile and requires a constant source of power to retain information. When power is lost or turned off, a RAM chip will lose the information stored on it.
Other differences between a ROM and a RAM chip include:
       A ROM chip is used primarily in the startup process of a computer, whereas a RAM chip is used in the normal operations of a computer after starting up and loading the operating system.
       Writing data to a ROM chip is a slow process, whereas writing data to a RAM chip is a much faster process.
       A RAM chip can store multiple gigabytes (GB) of data, up to 16 GB or more per chip, whereas a ROM chip typically stores only several megabytes (MB) of data, often around 4 MB per chip.

Computer ROM:-
A good example of ROM in the computer is the computer BIOS, a PROM chip that stores the programming needed to begin the initial computer start-up process. Using non-volatile storage is the only way to begin the start-up process for computers and other devices that use a similar start-up process.

THE END!
