One of the main advantages of threaded code is its compactness compared with code produced by alternative code generation techniques and calling conventions.
This advantage usually comes at the expense of slower execution speed.
However, sometimes there is a synergistic effect -- sometimes threaded code is both smaller and faster than non-threaded code.
A program small enough to fit entirely in random-access memory may run faster than a less-compact program in virtual memory that requires constant mechanical disk drive access, even though it suffers the threaded code interpretation overhead.
Similarly, a program small enough to fit entirely in a computer processor's cache may run faster than a less-compact program that suffers constant cache misses.
Threaded code is best known as the implementation technique commonly used in the Forth programming language. It was also used in early versions of the B programming language, as well as in many implementations of BASIC, and in some implementations of COBOL and other languages for small minicomputers.
History leading to threaded code
Early computers had relatively little memory. For example, most Data General Nova, IBM 1130, and many Apple II computers had only 4 K words of RAM installed. Consequently, much time was spent trying to find ways to reduce the size of programs so they would fit in the memory available.
Instead of writing out every step of an operation in every part of the program where it was needed, programmers saved memory by writing out every step of such operations exactly once (once and only once) and placing it in a subroutine.
This process -- code refactoring -- is still commonly used by all programmers, although now for different reasons.
The top-level application usually consists of nothing but subroutine calls. And many of those subroutines, in turn, also consist of nothing but subroutine calls.
Some early computers, such as the RCA 1802, required several instructions to call a subroutine. In the top-level application and in many subroutines, that sequence is repeated over and over again, with only the subroutine address changing from one call to the next. Using expensive memory to store the same subroutine-call instructions over and over again seemed wasteful.
Threaded code was invented to save this space.
To save space, programmers squeezed those lists of subroutine calls into simple lists of subroutine addresses (leaving out the "call" instruction on each one), and used a small interpreter (later called a virtual machine) to call each subroutine in turn. This is called Direct Threaded Code (DTC).
Charles H. Moore invented an even more compact notation in 1970 for his Forth virtual machine: indirect threaded code (ITC). Originally, Moore invented this because it was easy and fast on NOVA minicomputers, which have an indirection bit in every address.
He said (in published remarks, Byte Magazine's Forth Issue) that he found it so convenient that he propagated it into all later Forth designs.
Some Forth compilers compile Forth programs into direct-threaded code, while others make indirect-threaded code. The programs act the same either way.
Not content to stop there, programmers have developed other related techniques to make programs even more compact, although some of them are slower than direct-threaded code.
Threading models
Practically all executable threaded code uses one or another of these methods for calling subroutines (each method is called a "threading model").
Indirect threading
The addresses in the code point to the start of a data area, and the address at the start of that data area points to the machine code that uses the data. This "extra" pointer serves as an "executable data type" that defines a custom interpreter for the data: the address at the start of each data area points to code shared by all data areas of that type, so no separate type tag is usually needed. Indirect-threaded code is more compact, yet slightly slower, than direct-threaded code, and faster than byte code. Older Forth systems produced indirect-threaded code.
Direct threading
The addresses in the code are the addresses of machine language routines. This is a compromise between speed and space: the indirect data pointer is lost, at some cost to the language's flexibility, which may need to be made up with a type tag in the data areas and an auxiliary table. Some Forth systems produce direct-threaded code. On many machines direct threading is faster than subroutine threading (see reference below).
Subroutine threading
So-called "subroutine-threaded code" consists of a series of native machine language "call" instructions. This is not threaded code in the original sense, since the instructions are executed directly, without any interpretation. Early compilers for ALGOL, Fortran, COBOL and some Forth systems often produced subroutine-threaded code. The code in many of these systems operated on a last-in-first-out (LIFO) stack of operands, for which there was well-developed compiler theory. Many people believe that this is the fastest threading model, since many modern processors have special hardware support for subroutine "call" instructions. But according to measurements by Anton Ertl, "in contrast to popular myths, subroutine threading is usually slower than direct threading".
Ertl's most recent tests show that direct threading is the fastest threading model on Xeon, Opteron, and Athlon processors; indirect threading is the fastest threading model on Pentium M processors; and subroutine threading is the fastest threading model on Pentium 4, Pentium III, and PPC processors.
Token threading
Token threaded code uses lists of 8- or 12-bit indexes into a table of pointers. Token threaded code is notably compact, without much special effort by a programmer. It is usually half to three-fourths the size of other threaded code, which is itself a quarter to an eighth the size of compiled code. The table's pointers can be either indirect or direct. Some Forth compilers produce token threaded code. Some programmers consider the "p-code" generated by some Pascal compilers, as well as the byte codes used by .NET, Java, BASIC and some C compilers, to be token threading.
Huffman threading
Huffman threaded code consists of lists of Huffman codes: variable-length bit strings, each identifying a unique item. A Huffman-threaded interpreter locates subroutines using an index table or a tree of pointers that is navigated by the Huffman code. The codes are assigned by measuring how frequently each subroutine occurs in the code: frequent calls are given the shortest codes, and operations with approximately equal frequencies are given codes of nearly equal bit length. This makes Huffman threaded code one of the most compact representations known for a computer program. Most Huffman-threaded systems have been implemented as direct-threaded Forth systems, and used to pack large amounts of slow-running code into small, cheap microcontrollers. Most published uses have been in toys, calculators or watches.
Lesser-used threading models
String threading, where operations are identified by strings, usually looked up via a hash table. This was used in Charles H. Moore's earliest Forth implementations and in the University of Illinois's experimental hardware-interpreted computer language. It is also used in Bashforth.
Call threaded code uses a list of addresses that refer directly to machine language primitives, just like direct threaded code. However, the interpreter, instead of jumping to each primitive, invokes it with a subroutine call instruction. This is slower than direct threading, and may even be slower than indirect threading on some systems. However, it is convenient because it allows implementations written entirely in high-level languages, such as C or Oberon. For example, here is the core of such an interpreter in Oberon (untested):
(* In Oberon *)
CONST MaxStackSize* = 256; MaxProgramSize* = 256;
TYPE
  ThreadState* = POINTER TO ThreadStateDesc;
  Word* = PROCEDURE (current: ThreadState);
  StackSpace* = ARRAY MaxStackSize OF INTEGER;
  ThreadStateDesc* = RECORD
    program*: ARRAY MaxProgramSize OF Word;
    instructionPointer*: INTEGER;
    stack*: StackSpace;
    stackPointer*: INTEGER
  END;
VAR stillInterpreting*: BOOLEAN;
PROCEDURE Interpret* (current: ThreadState);
  VAR w: Word;
BEGIN
  stillInterpreting := TRUE;
  WHILE stillInterpreting DO
    w := current.program[current.instructionPointer];
    INC(current.instructionPointer);
    w(current)  (* a word may set stillInterpreting to FALSE to stop *)
  END
END Interpret;
Note that no assembly language is required for such an implementation.
Separating the data and return stacks in a machine eliminates a great deal of stack-management code, substantially reducing the size of the threaded code. The dual-stack principle originated three times independently: in Burroughs large systems, in Forth, and in PostScript. It is also used in some Java virtual machines.
Three registers are often present in a threaded virtual machine, and a fourth often exists for passing data between subroutines ('words'). These are: