Speed and Power of Computers
Computer Processing Speed
Although all computers are fast, there is a wide range of computer speeds. The execution of an instruction on even a very slow computer may be measured in less than a millisecond, which is one-thousandth of a second. Most computers can execute an instruction in microseconds, millionths of a second, and some modern computers have reached the nanosecond range, billionths of a second. Still to be broken is the picosecond barrier: one-trillionth of a second.
Microprocessor speeds are usually expressed in megahertz (MHz), millions of machine cycles per second. Thus a personal computer listed at 25 MHz has a processor capable of handling 25 million machine cycles per second. A top-speed personal computer will be many times faster.
Another measure of computer speed is MIPS, millions of instructions per second. MIPS is often a more accurate measure than clock speed because some computers can use each tick of the clock more efficiently than others. A third measure is the megaflop, which stands for one million floating-point operations per second; it measures the ability of the computer to perform complex mathematical operations.
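To make these units concrete, the short Python sketch below converts a clock rate in megahertz to a cycle time and estimates MIPS. It is an illustration only: the 25 MHz figure comes from the example above, but the cycles-per-instruction value is an assumption, not something specified in the text.

```python
# Illustrative sketch: relating clock speed, cycle time, and MIPS.
# The 25 MHz clock comes from the example above; the average
# cycles-per-instruction value is an assumption for demonstration.

def cycle_time_ns(clock_mhz):
    """Length of one machine cycle in nanoseconds."""
    return 1_000 / clock_mhz

def estimated_mips(clock_mhz, cycles_per_instruction):
    """Rough estimate of millions of instructions per second."""
    return clock_mhz / cycles_per_instruction

clock = 25   # MHz
cpi = 2      # assumed average machine cycles per instruction
print(f"Cycle time: {cycle_time_ns(clock):.0f} ns")               # 40 ns
print(f"Estimated speed: {estimated_mips(clock, cpi):.1f} MIPS")  # 12.5 MIPS
```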
Bus Lines
As is so often the case, the computer term bus is borrowed from its common meaning: a mode of transportation. A bus line is a set of parallel electrical paths that internally transport data from one place to another within the computer system. The amount of data that can be carried at one time is called the bus width, that is, the number of electrical paths. The greater the width, the more data can be carried at a time.
In general, the larger the word size or bus width, the more powerful the computer. A larger bus size means the following:
- The computer can transfer more data at a time, making the computer faster.
- The computer can reference larger memory addresses, allowing more memory.
- The computer can support a greater number and variety of instructions.
Microprocessors are sometimes labeled with notations that indicate their bus size. For example, a 386DX chip has a bus width of 32 bits, whereas a 386SX chip uses a 32-bit bus within the processor but only a 16-bit bus between the processor and memory. A buyer who cares about speed would prefer the DX chip, which can carry twice as much data between the processor and memory at a time.
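As a rough illustration of why width matters, the Python sketch below counts how many bus transfers are needed to move a block of data over a 16-bit and a 32-bit bus. The 64-byte block size is an arbitrary assumption, not a figure from the text.

```python
# Illustrative sketch: a wider bus moves the same data in fewer transfers.
import math

def transfers_needed(block_bytes, bus_width_bits):
    """Number of bus transfers required to move a block of data."""
    bytes_per_transfer = bus_width_bits // 8
    return math.ceil(block_bytes / bytes_per_transfer)

block = 64  # bytes, an arbitrary example size
for width in (16, 32):
    print(f"{width}-bit bus: {transfers_needed(block, width)} transfers")
# 16-bit bus: 32 transfers
# 32-bit bus: 16 transfers
```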
Flash Memory
We have stated that memory is volatile, meaning that its contents disappear when the power is turned off; hence the need for secondary storage to keep data on a more permanent basis. A long-standing speed problem has been the rate of accessing data from a secondary storage device such as a disk, a rate significantly slower than internal computer speeds. It once seemed unimaginable that data might someday be stored close at hand on nonvolatile memory chips, that is, nonvolatile RAM. A breakthrough has emerged in the form of nonvolatile flash memory. Flash chips are currently being used in cellular phones and cockpit flight recorders, and they are replacing disks in some handheld computers.
Flash memory chips are being produced in credit-card-like packages, which are smaller than a disk drive and require only half the power; that is why they are being used in notebook computers and handheld personal digital assistants. Manufacturers predict that a 100-megabyte flash card will soon sell at the same price as a magnetic disk drive of the same size.
Although flash memory is not yet commonplace, it seems likely that it will become a mainstream component. Because data and instructions would sit ever closer to the microprocessor, conversion to flash memory chips could have a pivotal impact on a computer's processing speed.
Cache
A cache (pronounced "cash") is a relatively small amount of very fast memory designed for the specific purpose of speeding up the internal transfer of data and software instructions. Think of cache as a selective memory: the data and instructions stored in cache are those that were most recently or most frequently used. When the processor first requests data or instructions, they must be retrieved from main memory, which delivers them at a pace that is relatively slow compared to the microprocessor. As they are retrieved, those same data and instructions are stored in cache. The next time the microprocessor needs data or instructions, it looks first in cache; if the needed items can be found there, they can be transferred at a rate that far exceeds a trip from main memory. Of course, cache is not big enough to hold everything, so the wanted data or instructions may not be there. But there is a good chance that frequently used items will be in cache. Since the most frequently used data and instructions are kept in a handy place, the net result is an improvement in processing speed.
Just how much cache speeds performance depends on a number of factors, including the size of the cache, the speed of the memory chips in the cache, and the software being run. Caching has become such a vital technique that some of the newer microprocessors have cache built into the processor's design.
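The toy Python sketch below, which is not part of the original text, mimics this lookup pattern: a small, fast store is checked first, and only misses go on to a slower simulated main memory. The class name, capacity, and delay are assumptions chosen purely for illustration.

```python
# Toy sketch of the cache idea: recently used items are kept in a small,
# fast store in front of a slower "main memory". All names and sizes here
# are illustrative assumptions.
from collections import OrderedDict
import time

MAIN_MEMORY = {address: address * 2 for address in range(1000)}  # stand-in for slow memory

class SimpleCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()          # holds the most recently used items

    def read(self, address):
        if address in self.store:           # cache hit: the fast path
            self.store.move_to_end(address)
            return self.store[address]
        time.sleep(0.001)                   # simulate the slower trip to main memory
        value = MAIN_MEMORY[address]
        self.store[address] = value         # keep a copy for next time
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used item
        return value

cache = SimpleCache()
for address in (5, 9, 5, 5, 9):             # repeated addresses are served from cache
    cache.read(address)
```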
RISC Technology: Less Is More
It flies in the face of computer tradition: instead of reaching for more variety, more power, more everything-for-everyone, proponents of RISCs (reduced instruction set computers) suggest that we could get by with a little less. In fact, reduced instruction set computers offer only a small subset of instructions; the absence of bells and whistles increases speed. So we have a radical back-to-basics movement in computer design.
RISC supporters say that, on conventional computers (called CISCs, or complex instruction set computers), a hefty chunk of the built-in instructions, the instruction set, is rarely used. Those instructions, they note, are underused, inefficient, and impediments to performance. RISC computers, with their stripped-down instruction sets, zip through programs like racing cars, at speeds four to ten times those of CISC computers. This is heady stuff for merchants of speed who want to attract customers by offering more speed for the money.
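Purely as a conceptual toy, and not a model of how real RISC hardware works, the Python sketch below defines a handful of simple instructions and composes a more complex operation (multiplication) out of them, illustrating the idea that a small instruction set can still express everything a program needs.

```python
# Toy sketch (an illustrative assumption, not real hardware behavior):
# a tiny "reduced" instruction set in which a complex operation such as
# multiply is built from a few simple instructions.

def run(program, registers):
    """Execute a list of (opcode, operands) tuples on a register dictionary."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":            # LOAD reg, constant
            registers[args[0]] = args[1]
        elif op == "ADD":           # ADD dest, src  ->  dest = dest + src
            registers[args[0]] += registers[args[1]]
        elif op == "DEC":           # DEC reg  ->  reg = reg - 1
            registers[args[0]] -= 1
        elif op == "JNZ":           # JNZ reg, target  ->  jump if reg != 0
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Multiply 6 * 4 using only the simple instructions above.
program = [
    ("LOAD", "acc", 0),
    ("LOAD", "count", 4),
    ("LOAD", "value", 6),
    ("ADD", "acc", "value"),   # index 3: acc += value
    ("DEC", "count"),
    ("JNZ", "count", 3),       # repeat the addition until count reaches 0
]
print(run(program, {})["acc"])  # 24
```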
Parallel Processing
The ultimate speed solution is parallel processing, a method of using several processors at the same time. Consider the description of computer processing you have seen so far in this chapter: the processor gets an instruction from memory, acts on it, returns processed data to memory, and then repeats the process. This is conventional serial processing.
The problem with the conventional computer is that the single electronic pathway, the bus line, acts like a bottleneck. The computer has a one-track mind because it is restricted to handling one piece of data at a time. For many applications, such as simulating the airflow around an entire airplane in flight, this is an exceedingly inefficient procedure. A better solution? Many processors, each with its own memory unit, working at the same time: parallel processing.
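To make the contrast concrete, here is a small illustrative Python sketch, not drawn from the original text, that does the same independent work serially and then across several worker processes; the workload and the process count are assumptions chosen for demonstration.

```python
# Illustrative sketch: the same workload done serially and in parallel.
# The workload (summing squares over number ranges) and the process
# count are assumptions chosen purely for demonstration.
from multiprocessing import Pool

def sum_of_squares(bounds):
    """Sum the squares of the integers in [start, end)."""
    start, end = bounds
    return sum(n * n for n in range(start, end))

if __name__ == "__main__":
    # Split one big range into four independent chunks.
    chunks = [(0, 2_500_000), (2_500_000, 5_000_000),
              (5_000_000, 7_500_000), (7_500_000, 10_000_000)]

    # Serial processing: one chunk at a time.
    serial_total = sum(sum_of_squares(c) for c in chunks)

    # Parallel processing: several worker processes at the same time.
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(sum_of_squares, chunks))

    assert serial_total == parallel_total
```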
A number of parallel processors are being built and sold commercially. However, do not look for parallel processing in personal computers just yet. Thus far, the technology is limited to larger computers.
The future holds some exciting possibilities for computer chips. New speed breakthroughs certainly will continue. One day we may see computers that use light (photonics) rather than electricity (electronics) to control their operation. Light travels faster and is less likely to be disrupted by electrical interference. Also, light beams can pass through each other, alleviating some of the problems that occur in the design of electronic components, in which wires must not cross. And would you believe computers that are actually grown as biological cultures? So-called biochips may replace today's silicon chips. As research continues, so will the surprises.
Whatever the design and processing strategy of a computer, its goal is the same: to turn raw input into useful output. Input and output are the topics of the next chapter.
Sources and references: Capron, H. L., Computers: Tools for an Information Age.