The first step toward automating logic minimization was the introduction of the Quine-McCluskey algorithm, an exact minimization technique that could be implemented on a computer. It introduced the notions of prime implicants and minimum-cost covers that would become the cornerstone of two-level minimization. Today, the much more efficient Espresso heuristic logic minimizer has become the standard tool for this operation. Another area of early research was state minimization and encoding of finite state machines (FSMs), a task that was the bane of designers. The applications of logic synthesis lay primarily in digital computer design; hence, IBM and Bell Labs played a pivotal role in the early automation of logic synthesis. The evolution from discrete logic components to programmable logic arrays (PLAs) hastened the need for efficient two-level minimization, since minimizing the number of terms in a two-level representation reduces the area of a PLA.
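The prime-implicant generation step of Quine-McCluskey can be sketched in a few lines of Python. This is a toy version for small functions only; the full algorithm also includes the covering step that selects a minimum-cost subset of the primes. Implicants are represented here as (value, mask) pairs, an encoding chosen for this sketch:

```python
from itertools import combinations

def primes(minterms):
    """Find the prime implicants of a Boolean function given as a set
    of minterm indices.

    An implicant is a (value, mask) pair: bits set in the mask are
    "don't care" positions.  Two implicants combine when their masks
    are equal and their values differ in exactly one bit.
    """
    current = {(m, 0) for m in minterms}
    prime = set()
    while current:
        nxt, combined = set(), set()
        for a, b in combinations(current, 2):
            (va, ma), (vb, mb) = a, b
            if ma == mb and bin(va ^ vb).count("1") == 1:
                # Merge the pair; the differing bit becomes a don't-care.
                nxt.add((va & vb, ma | (va ^ vb)))
                combined.update([a, b])
        # Anything that combined with nothing is prime.
        prime |= current - combined
        current = nxt
    return prime

# f(a, b) = sum of minterms 0, 1, 2 reduces to the primes a' and b':
# {(0, 1), (0, 2)} -- value 0 with one input bit marked don't-care.
```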
However, two-level logic circuits are of limited importance in very-large-scale integration (VLSI) design; most designs use multiple levels of logic. In fact, almost any circuit representation in RTL or behavioural description is a multi-level representation. An early system used to design multilevel circuits was LSS from IBM, which used local transformations to simplify logic. Work on LSS and the Yorktown Silicon Compiler spurred rapid research progress in logic synthesis in the 1980s. Several universities contributed by making their research available to the public, most notably SIS from the University of California, Berkeley; RASP from the University of California, Los Angeles; and BOLD from the University of Colorado, Boulder. Within a decade, the technology migrated to commercial logic synthesis products offered by electronic design automation companies.
The tasks of scheduling, resource allocation, and sharing generate the FSM and the datapath of the RTL description of the design. Scheduling assigns each operation to a point in time, while allocation binds each operation or variable to a hardware resource. Given a schedule, allocation optimizes the amount of hardware required to implement the design.
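One common scheduling policy can be sketched briefly: as-soon-as-possible (ASAP) scheduling, which ignores resource limits and places each operation in the earliest control step its data dependencies allow. The operation names and dependency format below are illustrative, not from any particular tool:

```python
def asap_schedule(ops, deps):
    """As-soon-as-possible scheduling: each operation is placed one
    control step after the latest of its predecessors.

    ops:  iterable of operation names
    deps: dict mapping an op to the list of ops it depends on
    """
    step = {}

    def place(op):
        if op not in step:
            preds = deps.get(op, [])
            # Ops with no predecessors go in step 1.
            step[op] = 1 + max((place(p) for p in preds), default=0)
        return step[op]

    for op in ops:
        place(op)
    return step

# Computing (a*b) + (c*d): both multiplies run in step 1, the add in
# step 2.  Allocation would then decide, e.g., whether the two
# multiplies share one multiplier (forcing one into a later step) or
# use two separate units.
schedule = asap_schedule(["m1", "m2", "add"], {"add": ["m1", "m2"]})
```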
Next, this network is optimized using several technology-independent techniques before technology-dependent optimizations are performed. The typical cost function during technology-independent optimization is the total literal count of the factored representation of the logic function, which correlates quite well with circuit area.
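The literal-count metric is easy to illustrate: f = ab + ac has four literals as a sum of products, but only three in the factored form a(b + c). A minimal sketch, assuming factored expressions are represented as nested tuples with variable-name strings as leaves (a representation invented for this example):

```python
def literal_count(expr):
    """Count the literals in a factored Boolean expression.

    An expression is either a variable-name string (a literal,
    possibly complemented, e.g. "a'") or a tuple
    ('and'|'or', operand, operand, ...).
    """
    if isinstance(expr, str):
        return 1
    return sum(literal_count(e) for e in expr[1:])

# f = ab + ac as a sum of products: 4 literals.
sop = ("or", ("and", "a", "b"), ("and", "a", "c"))

# The factored form a(b + c): 3 literals, so a smaller
# technology-independent cost.
factored = ("and", "a", ("or", "b", "c"))
```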
Finally, technology-dependent optimization transforms the technology-independent circuit into a network of gates in a given technology. The simple cost estimates are replaced by more concrete, implementation-driven estimates during and after technology mapping. Mapping is constrained by factors such as the available gates (logic functions) in the technology library, the drive sizes for each gate, and the delay, power, and area characteristics of each gate.
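A toy version of the cell-selection decision might look like the following. The cell names, truth-table encoding, and area figures are invented for illustration; a real mapper covers the whole network (typically by tree or DAG covering, weighing delay and power as well as area) rather than matching one node at a time:

```python
# A toy technology library.  Each entry is (cell name, truth table,
# area), where bit (2a + b) of the truth table holds f(a, b).
LIBRARY = [
    ("NAND2_X1", 0b0111, 1.0),
    ("NAND2_X2", 0b0111, 1.5),  # same function, higher drive strength
    ("AND2_X1",  0b1000, 2.0),
    ("NOR2_X1",  0b0001, 1.0),
]

def best_cell(tt, library):
    """Return the minimum-area library cell implementing truth table tt."""
    matches = [(area, name) for name, cell_tt, area in library
               if cell_tt == tt]
    if not matches:
        raise ValueError("no cell in the library implements this function")
    area, name = min(matches)  # area-only objective; real flows weigh delay too
    return name, area

# For a NAND node, the smaller of the two matching drive sizes wins
# under a pure area objective; a timing-driven mapper might instead
# pick NAND2_X2 on a heavily loaded net.
cell = best_cell(0b0111, LIBRARY)
```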