Thursday, April 9, 2009

Memory Mapped I/O & Port I/O

Memory-mapped I/O

From Wikipedia, the free encyclopedia

Memory-mapped I/O (MMIO) and port I/O (also called port-mapped I/O or PMIO) are two complementary methods of performing input/output between the CPU and peripheral devices in a computer. Another method, not discussed in this article, is using dedicated I/O processors—commonly known as channels on mainframe computers—that execute their own instructions.

Memory-mapped I/O (not to be confused with memory-mapped file I/O) uses the same address bus to address both memory and I/O devices, and the CPU instructions used to access memory are also used for accessing devices. To accommodate the I/O devices, areas of the CPU's addressable space must be reserved for I/O. The reservation might be temporary (the Commodore 64 could bank switch between its I/O devices and regular memory) or permanent. Each I/O device monitors the CPU's address bus and responds to any CPU access of its assigned address space, connecting the data bus to the desired device's hardware register.
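
As a minimal sketch of what this looks like in software, the C fragment below drives a hypothetical memory-mapped serial port using nothing but ordinary load and store instructions; the base address 0x4000C000, the register offsets and the "transmitter ready" bit are illustrative assumptions, not any real device.

    #include <stdint.h>

    /* Hypothetical memory-mapped serial port: base address, register offsets
       and bit layout are assumptions made for illustration only. */
    #define UART_BASE     0x4000C000u
    #define UART_STATUS   (*(volatile uint32_t *)(UART_BASE + 0x00))
    #define UART_DATA     (*(volatile uint32_t *)(UART_BASE + 0x04))
    #define UART_TX_READY (1u << 0)   /* assumed "transmitter ready" flag */

    /* The same ordinary store instruction the CPU uses for RAM reaches the
       device, because the device decodes this part of the address space.
       'volatile' stops the compiler from caching or reordering the accesses. */
    void uart_putc(char c)
    {
        while ((UART_STATUS & UART_TX_READY) == 0)
            ;                        /* busy-wait until the device is ready */
        UART_DATA = (uint32_t)c;     /* plain memory write = I/O write */
    }

Because these are plain loads and stores, any addressing mode or pointer arithmetic the CPU supports for memory also works on the device registers.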

Port-mapped I/O uses a special class of CPU instructions specifically for performing I/O. This is generally found on Intel microprocessors, specifically the IN and OUT instructions, which can read and write a single byte to an I/O device. I/O devices have a separate address space from general memory, accomplished either by an extra "I/O" pin on the CPU's physical interface or by an entire bus dedicated to I/O.
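
By contrast, port space can only be reached through the dedicated instructions. A sketch for x86 using GCC/Clang inline assembly is shown below; port number 0x3F8 (the traditional COM1 base) is used purely as an example, and on most systems such accesses are privileged and would live in a driver or kernel.

    #include <stdint.h>

    /* Wrappers around the x86 IN/OUT instructions (GCC/Clang inline asm).
       Ports live in their own address space, separate from memory, so
       ordinary pointers cannot reach them. */
    static inline void outb(uint16_t port, uint8_t value)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t value;
        __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }

    /* Example: write a byte to port 0x3F8 and read one back. */
    void port_io_demo(void)
    {
        outb(0x3F8, 0x55);
        (void)inb(0x3F8);
    }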

A device's direct memory access (DMA) is not affected by these CPU-to-device communication methods; in particular, it is not affected by memory mapping. This is because, by definition, DMA is a memory-to-device communication method that bypasses the CPU.

A hardware interrupt is yet another communication method between the CPU and peripheral devices. However, it is always treated separately, for a number of reasons. It is device-initiated, as opposed to the CPU-initiated methods above. It is also unidirectional, as information flows only from the device to the CPU. Lastly, each interrupt line by itself carries only one bit of information, with a fixed meaning: "there is an interrupt".

Relative merits of the two I/O methods

The main advantage of port-mapped I/O is on CPUs with limited addressing capability: because port-mapped I/O separates I/O access from memory access, the full address space can be used for memory. It is also obvious to a person reading an assembly language listing when I/O is being performed, because of the special instructions that can be used only for that purpose.

I/O operations can slow memory access if the address and data buses are shared, because the peripheral device is usually much slower than main memory. In some architectures, port-mapped I/O operates over a dedicated I/O bus, alleviating the problem.

The advantage of memory-mapped I/O is that, by discarding the extra complexity that port I/O brings, a CPU requires less internal logic and is thus cheaper, faster and easier to build; this follows the basic tenets of reduced instruction set computing, and is also advantageous in embedded systems. The fact that regular memory instructions are used to address devices also means that all of the CPU's addressing modes are available for I/O as well as for memory. As 16-bit processors have become obsolete and been replaced with 32-bit and 64-bit processors in general use, reserving ranges of the memory address space for I/O has become less of a problem.

Example

Consider a simple system built around an 8-bit microprocessor. Such a CPU might provide 16 address lines, allowing it to address up to 64 kibibytes (KiB) of memory. On such a system, perhaps the first 32 KiB of address space would be allotted to random access memory (RAM), another 16 KiB to read-only memory (ROM) and the remainder to a variety of other devices such as timers, counters, video display chips, sound generating devices, and so forth. The hardware of the system is arranged so that devices on the address bus respond only to the particular addresses intended for them; all other addresses are ignored. This is the job of the address decoding circuitry, and it is this that establishes the memory map of the system.

Thus we might end up with a memory map like so:


Device                                       Address range (hexadecimal)   Size
RAM                                          0000 - 7FFF                   32 KiB
General purpose I/O                          8000 - 80FF                   256 bytes
Sound controller                             9000 - 90FF                   256 bytes
Video controller / text-mapped display RAM   A000 - A7FF                   2 KiB
ROM                                          C000 - FFFF                   16 KiB

Note that this memory map contains gaps; that is also quite common.

Assuming the fourth register of the video controller sets the background colour of the screen, the CPU can set this colour by writing a value to memory location A003 using its standard memory write instruction. Using the same method, glyphs can be displayed on a screen by writing character values into a special area of RAM within the video controller. Before cheap RAM made bit-mapped displays practical, this character cell method was a popular technique for computer video displays (see Text user interface).
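
Written as C for that hypothetical 8-bit system, the two operations might look like the sketch below; the meaning of the value 0x01 and the exact offset of the character-cell area within the video controller's range are assumptions made for illustration.

    #include <stdint.h>

    /* Addresses taken from the memory map above.  A000 is the first video
       controller register, so its fourth register sits at A003. */
    #define VIDEO_BASE      0xA000u
    #define VIDEO_BG_COLOUR (*(volatile uint8_t *)(VIDEO_BASE + 0x03))
    #define TEXT_RAM        ((volatile uint8_t *)(VIDEO_BASE + 0x100)) /* assumed offset */

    void set_background_and_print(void)
    {
        VIDEO_BG_COLOUR = 0x01;             /* ordinary store to A003 changes the colour */

        const char *msg = "HI";
        for (unsigned i = 0; msg[i] != '\0'; ++i)
            TEXT_RAM[i] = (uint8_t)msg[i];  /* writing character codes makes glyphs appear */
    }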

Basic types of address decoding

  • Exhaustive - 1:1 mapping of unique addresses to one hardware register (physical memory location)
  • Partial - n:1 mapping of n unique addresses to one hardware register. Partial decoding allows a memory location to have more than one address, allowing the programmer to reference a memory location using n different addresses. Synonyms: foldback, multiply-mapped, partially-mapped.
  • Linear - Address lines are used directly without any decoding logic

Incomplete address decoding

Addresses may be decoded completely or incompletely by a device.

  • Complete decoding involves checking every line of the address bus; an access to an unmapped region of memory then leaves the data bus open (undriven).
  • Incomplete decoding, or partial decoding, uses simpler and often cheaper logic that examines only some address lines. Such simple decoding circuitry might allow a device to respond to several different addresses, effectively creating virtual copies of the device at different places in the memory map. All of these copies refer to the same real device, so there is no particular advantage in doing this, except to simplify the decoder. Commonly, the decoding itself is programmable, so the system can reconfigure its own memory map as required.
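
A toy software model of such an incompletely decoded device may make the aliasing concrete. In the sketch below the decoder examines only the top four address lines (selecting the 8000-8FFF block) and the low eight, ignoring A11-A8 entirely; the block boundaries and register count are assumptions chosen to roughly match the example memory map above.

    #include <stdint.h>
    #include <stdio.h>

    /* One real set of 256 device registers... */
    static uint8_t regs[256];

    /* ...selected by a decoder that checks only A15-A12 and A7-A0.
       Because A11-A8 are ignored, the registers appear 16 times,
       at 8000-80FF, 8100-81FF, ... 8F00-8FFF. */
    static uint8_t *decode(uint16_t addr)
    {
        if ((addr & 0xF000u) != 0x8000u)
            return NULL;                   /* address is outside the device's block */
        return &regs[addr & 0x00FFu];      /* the low 8 bits pick the register */
    }

    int main(void)
    {
        *decode(0x8003) = 0x42;              /* write through one alias...              */
        printf("%02X\n", *decode(0x8B03));   /* ...read back through another: prints 42 */
        return 0;
    }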

