Intel 8086

The 8086 is a 16-bit microprocessor chip designed by Intel and introduced to the market in 1978; it gave rise to the x86 architecture. The Intel 8088, released in 1979, was essentially the same chip but with an external 8-bit data bus (allowing the use of cheaper and fewer supporting logic chips), and is notable as the processor used in the original IBM PC.

History

Background

In 1972, Intel launched the 8008, the first 8-bit microprocessor. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which also proved to be fairly general purpose. The device needed several additional ICs to produce a functional computer, in part due to its small 18-pin "memory package", which prevented the use of a separate address bus (Intel was primarily a DRAM manufacturer at the time).

Two years later, in 1974, Intel launched the 8080, employing the new 40-pin DIL packages originally developed for calculator ICs to enable a separate address bus. It had an extended instruction set that was source- (not binary-) compatible with the 8008, and also included some 16-bit instructions to make programming easier. The 8080, often described as the first truly useful microprocessor, was nonetheless soon replaced by the 8085, which could operate from a single 5 V power supply instead of the three voltages required by earlier chips. Other well-known 8-bit microprocessors that emerged during these years were the Motorola 6800 (1974), Microchip PIC16X (1975), MOS Technology 6502 (1975), Zilog Z80 (1976), and Motorola 6809 (1977).

The first x86 design

The 8086 was originally intended as a temporary substitute for the ambitious iAPX 432 project, in an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the same time to top the successful Z80 (designed by former Intel employees). Both the architecture and the physical chip were therefore developed quickly (in a little more than two years), using the same basic microarchitecture elements and physical implementation techniques as the 8085 of a year earlier, of which the 8086 would also function as a continuation. Marketed as source compatible, it was designed so that assembly language for the 8085, 8080, or 8008 could be automatically converted into equivalent (sub-optimal) 8086 source code, with little or no hand-editing. This was possible because the programming model and instruction set were (loosely) based on the 8085. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly basic 16-bit capabilities of the 8080/8085. New kinds of instructions were added as well: self-repeating operations, and instructions to better support nested ALGOL-family languages such as Pascal, among others.

The 8086 was sequenced using a mix of random logic and microcode, and was implemented in depletion-load nMOS circuitry with approximately 20,000 active transistors (29,000 counting all ROM and PLA sites). It was soon moved to a new refined nMOS manufacturing process called HMOS (for High-performance MOS) that Intel had originally developed for manufacturing fast static RAM products. This was followed by HMOS-II, HMOS-III, and eventually a CMOS version. The original chip measured 33 mm², and the minimum feature size was 3.2 μm.

The architecture was defined by Stephen P. Morse and Bruce Ravenel. Peter A. Stoll was lead engineer of the development team and William Pohlman the manager. While less known than the 8088 chip, the legacy of the 8086 is enduring; references to it can still be found on most modern computers in the form of the vendor entry for all Intel device IDs, which is "8086". It also lent its last two digits to Intel's later extended versions of the design, such as the 286 and the 386, all of which eventually became known as the x86 family.

Details

Buses and operation

All internal registers, as well as the internal and external data buses, were 16 bits wide, firmly establishing the "16-bit microprocessor" identity of the 8086. A 20-bit external address bus gave a 1 MB (segmented) physical address space (2^20 = 1,048,576). The data bus was multiplexed with the address bus in order to fit a standard 40-pin dual in-line package. 16-bit I/O addresses meant 64 KB of separate I/O space (2^16 = 65,536). The maximum linear address space was limited to 64 KB, simply because internal registers were only 16 bits wide. Programming over 64 KB boundaries involved adjusting segment registers (see below) and was therefore fairly awkward (and remained so until the 80386).

Some of the control pins, which carry essential signals for all external operations, had more than one function depending upon whether the device was operated in "min" or "max" mode. The former was intended for small single-processor systems, while the latter was for medium or large systems using more than one processor.

Registers and instructions

The 8086 had eight (more or less general) 16-bit registers, including the stack pointer but excluding the instruction pointer, flag register, and segment registers. Four of them (AX, BX, CX, DX) could also be accessed as twice as many 8-bit registers (AH, AL, BH, BL, etc.); the other four (BP, SI, DI, SP) were 16-bit only.

Due to a compact encoding inspired by the 8085 and other 8-bit processors, most instructions were one-address or two-address operations, meaning that the result was stored in one of the operands. At most one of the operands could be in memory, but this memory operand could also be the destination, while the other operand, the source, could be either a register or an immediate value. A single memory location could also often be used as both source and destination, which, among other factors, further contributed to a code density comparable to (and often better than) that of most eight-bit machines.

Although the degree of generality of most registers was much greater than in the 8080 or 8085, it was still fairly low compared to the typical contemporary minicomputer, and registers were also sometimes used implicitly by instructions. While perfectly sensible for the assembly programmer, this complicated register allocation for compilers compared to more regular 16- and 32-bit processors (such as the PDP-11, VAX, 68000, etc.); on the other hand, compared to contemporary 8-bit microprocessors (such as the 8085 or 6502), it was significantly easier to generate code for the 8086 design.

As mentioned above, the 8086 also featured 64 KB of 8-bit I/O space (or, alternatively, 32 K 16-bit words). A 64 KB (one segment) stack growing towards lower addresses is supported in hardware; 2-byte words are pushed to the stack, and the stack top (bottom) is pointed to by SS:SP. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return address.
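
A minimal sketch in C of the stack behaviour just described (the names and the byte-array model of the stack segment are assumptions for illustration, not Intel's implementation): the stack pointer is decremented by 2 before a word is stored, so the stack grows toward lower addresses, and each word is stored low byte first.

    #include <stdint.h>

    /* Illustrative model: one 64 KB stack segment (the region selected by SS),
       with SP as a 16-bit offset into it.  SP is assumed to stay word-aligned. */
    static uint8_t stack_segment[1 << 16];

    void push16(uint16_t *sp, uint16_t value)
    {
        *sp -= 2;                                          /* grow toward lower addresses */
        stack_segment[*sp]     = (uint8_t)(value & 0xFF);  /* low byte first (little-endian) */
        stack_segment[*sp + 1] = (uint8_t)(value >> 8);    /* then the high byte */
    }

    uint16_t pop16(uint16_t *sp)
    {
        uint16_t value = (uint16_t)(stack_segment[*sp]
                       | (stack_segment[*sp + 1] << 8));
        *sp += 2;
        return value;
    }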

The processor had some new instructions (not present in the 8085) to better support stack-based high-level programming languages such as Pascal and PL/M; some of the more useful ones were push mem-op and ret size, supporting the "Pascal calling convention". (Several others, such as push immed and enter, would be added in the subsequent 80186, 80286, and 80386 designs.)

Segmentation

There were also four segment registers (CS, DS, SS, ES) that could be set from the normal registers. The segment registers allowed the CPU to access one megabyte of memory in an odd way. Rather than just supplying missing bytes, as in most segmented processors, the 8086 shifted the 16-bit segment value left 4 bits and added it to the offset address, thus:

physical address = segment×16 + offset

The physical memory address was therefore 20 bits wide (thus 1 MB), while both segment addresses and offsets were 16 bits, giving 32-bit segment:offset pairs. As a result of this scheme, segments overlapped, making it possible to have up to 4096 different pointers addressing the same location (2^(32−20) = 4096). While acceptable, and even useful, for assembly language programming (where control of the segments was complete), it caused confusion and was widely considered poor design, forcing entirely new concepts (near and far keywords) into languages such as Pascal and C; data and/or code could be managed within near 16-bit segments, or a large data structure (or many and/or large procedures) could be accessed by far pointers (32-bit segment:offset pairs), reaching the full physical address space.
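
This calculation can be shown as a short C sketch (the helper name and the sample values are arbitrary choices for illustration): shifting the segment left four bits and adding the offset yields the 20-bit physical address, and since segments start every 16 bytes, many different segment:offset pairs name the same byte.

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode address calculation as described above:
       physical = segment * 16 + offset, truncated to the 20-bit address bus. */
    static uint32_t physical(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + (uint32_t)offset) & 0xFFFFF;
    }

    int main(void)
    {
        /* Two different segment:offset pairs naming the same physical byte. */
        printf("%05X\n", (unsigned)physical(0x1234, 0x0010));   /* prints 12350 */
        printf("%05X\n", (unsigned)physical(0x1235, 0x0000));   /* prints 12350 as well */
        return 0;
    }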

The designers actually contemplated using an 8-bit shift (instead of 4-bit) in order to create a 16 MB physical address space. However, as this would have forced segments to begin on 256-byte boundaries, and 1 MB was considered very large for a microprocessor around 1976, the idea was dismissed. Also, there were not enough pins available on a low-cost 40-pin package.

Subsequent expansion

Although this scheme may seem an unlikely candidate for future expansion, it was nevertheless expanded by a new MMU-controlled addressing scheme in the 80286's protected mode, where the 16-bit segment registers instead supply a 14-bit index (plus a 2-bit privilege level) for an indirect base address (to which the offset is added, as before), giving a virtual address space of 2^30 bytes (2^24 bytes physical). Later, and on top of this, the 80386 widened the whole general-purpose register set (and hence the offsets) to 32 bits, thereby enabling a linear (and physical) addressing range of 2^32 bytes (with a 2^46-byte virtual space). However, those chips were, for a long period of time, often used in real mode (no paging) to remain compatible with older OSes.
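
A rough C sketch of the indirection described above (the descriptor structure, table, and names are assumptions for illustration; real descriptors also carry limits and access rights): the selector no longer contributes address bits directly; its upper 14 bits index a table entry whose base is added to the 16-bit offset, while the low 2 bits carry the privilege level.

    #include <stdint.h>

    /* Hypothetical model of the 80286 protected-mode lookup described above. */
    typedef struct {
        uint32_t base;                         /* 24-bit physical base address on the 286 */
    } descriptor;

    static descriptor descriptor_table[1 << 14];   /* indexed by the 14-bit field */

    uint32_t linear_address(uint16_t selector, uint16_t offset)
    {
        uint16_t index = selector >> 2;        /* upper 14 bits: table index */
        uint8_t  rpl   = selector & 0x3;       /* lower 2 bits: privilege level */
        (void)rpl;                             /* privilege checks omitted in this sketch */
        return descriptor_table[index].base + offset;
    }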

Porting older software

Small programs could ignore the segmentation and just use plain 16-bit addressing. This allowed 8-bit software to be quite easily ported to the 8086. The authors of MS-DOS took advantage of this by providing an application programming interface very similar to that of CP/M, as well as including the simple .com executable file format, identical to CP/M's. This was important when the 8086 and MS-DOS were new, because it allowed many existing CP/M (and other) applications to be quickly made available, greatly easing the acceptance of the new platform.

Performance

Although partly shadowed by other design choices in this particular chip, the multiplexed bus limited performance slightly; transfers of 16-bit or 8-bit quantities were done in a four-clock memory access cycle. As instructions varied from 1 to 6 bytes, fetch and execution were made concurrent (as they remain in today's x86 processors): the bus interface unit fed the instruction stream to the execution unit through a 6-byte prefetch queue (a form of loosely coupled pipelining), speeding up operations on registers and immediates, while memory operations unfortunately became slower (four years later, this performance problem was fixed with the 80186 and 80286). However, the full (instead of partial) 16-bit architecture with a full-width ALU meant that 16-bit arithmetic instructions could now be performed with a single ALU cycle (instead of two, via carry), speeding up such instructions considerably. Combined with orthogonalizations of operations versus operand types and addressing modes, as well as other enhancements, this made the performance gain over the 8080 or 8085 fairly significant, despite cases where the older chips may be faster (see below).

Execution times for typical instructions (in clock cycles):

Timings are best case, depending on prefetch status, instruction alignment, and other factors.

MOV  reg,reg: 2    reg,imm: 4    reg,mem: 8+EA    mem,reg: 9+EA     mem,imm: 10+EA
ALU  reg,reg: 3    reg,imm: 4    reg,mem: 9+EA    mem,reg: 16+EA    mem,imm: 17+EA

JMP reg: 11    JMP label: 15    Jcc label: 16    (cc = condition code)

MUL reg: 70–118 cycles
IDIV reg: 101–165 cycles

EA: time to compute the effective address, ranging from 5 to 12 cycles.
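
As a worked example of reading these figures (the instruction sequence and the choice of EA = 5 cycles, the minimum of the range above, are assumptions for illustration): loading a word from memory, adding a register to it, and writing it back costs roughly 30 cycles in the best case, about 6 microseconds at 5 MHz.

    #include <stdio.h>

    /* Best-case cycle count for an assumed three-instruction sequence,
       using the figures in the table above with EA = 5:
         MOV AX,[BX]   (reg,mem)  ->  8 + EA
         ADD AX,CX     (reg,reg)  ->  3
         MOV [BX],AX   (mem,reg)  ->  9 + EA   */
    int main(void)
    {
        const int ea = 5;
        int total = (8 + ea) + 3 + (9 + ea);
        printf("total: %d cycles, %.1f microseconds at 5 MHz\n", total, total / 5.0);
        return 0;
    }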

As can be seen from these timings, operations on registers and immediates were fast (2 to 4 cycles), while memory-operand instructions and jumps were quite slow; jumps took more cycles than on the simple 8080 and 8085, and the 8088 (used in the IBM PC) was additionally hampered by its narrower bus. The reasons why most memory-related instructions were slow were threefold:

  • Loosely coupled fetch and execution units are efficient for instruction prefetch, but not for jumps and random data access (without special measures).
  • Address calculations were performed by microcode routines without a dedicated adder (using the main ALU), although it had a separate segment + offset adder.
  • The address and data buses were multiplexed, forcing a bus cycle some 33–50% longer than in typical contemporary 8-bit processors.

Memory access performance was, however, drastically enhanced with Intel's next-generation chips. The 80186 and 80286 both performed address calculation in hardware, saving many cycles, and the 80286 also had separate (non-multiplexed) address and data buses.

Floating point

The 8086/8088 could be connected to a mathematical coprocessor to add floating point capability. The Intel 8087 was the standard math coprocessor, operating on 80-bit numbers, but manufacturers like Weitek soon offered higher performance alternatives.

Chip versions

The clock frequency was originally limited to 5 MHz (the IBM PC used 4.77 MHz, 4/3 of the standard NTSC color burst frequency of 3.579545 MHz), but the last versions in HMOS were specified for 10 MHz. HMOS-III and CMOS versions were manufactured for a long time (at least a while into the 1990s) for embedded systems, although its successor, the 80186/80188, has been more popular for embedded use.

Derivatives and clones

Compatible and, in many cases, enhanced versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, and AMD. For example, the NEC V20 and NEC V30 pair were hardware compatible with the 8088 and 8086, respectively, but incorporated the instruction set of the 80186 along with some (but not all) of the 80186 speed enhancements, providing a drop-in capability to upgrade both instruction set and processing speed without manufacturers having to modify their designs. Such relatively simple and low-power 8086-compatible processors in CMOS are still used in embedded systems.

The electronics industry of the Soviet Union was able to replicate the 8086 through both industrial espionage and reverse engineering. The resulting chip, the К1810ВМ86, was pin-compatible with the original Intel 8086 and had the same instruction set. This IC was the core of the Soviet-made PC-compatible ES1840 and ES1841 desktops. However, in hardware these computers differed significantly from their prototypes (the IBM PC and PC/XT, respectively): the К1810ВМ86 was a copy of the Intel 8086, not the Intel 8088, and the data/address bus circuitry was designed independently of the original Intel products.

Notable bugs

8086/8088 CPUs produced prior to 1982 had a severe interrupt bug. IBM provided an upgrade free of charge to affected PCs. Processors remaining with original 1979 markings are quite rare; they have become collectors' items.

Microcomputers using the 8086

  • One of the most influential microcomputers of all, the IBM PC, used the Intel 8088, a version of the 8086 with an eight-bit data bus (as mentioned above).
  • The first commercial microcomputer built on the basis of the 8086 was the Mycron 2000.
  • The IBM Displaywriter word processing machine and the Wang Professional Computer, manufactured by Wang Laboratories, also used the 8086. Also, this chip could be found in the AT&T 6300 PC (built by Olivetti).
  • The first Compaq Deskpro used an 8086 running at 7.14 MHz, but was capable of running add-in cards designed for the 4.77 MHz IBM PC XT.
  • The FLT86 is a well-established training system for the 8086 CPU, still being manufactured by Flite Electronics International Limited in Southampton, England.
  • The IBM PS/2 models 25 and 30 were built with an 8 MHz 8086.
  • The Tandy 1000 SL-series machines used 8086 CPUs.
