Geometric transformations are applied to the vertices of polygons, or of other geometric objects used as modelling primitives, in the first stage of a classical geometry-based rendering pipeline. Geometric computations are also used to transform polygon or patch surface normals, and then to perform the lighting and shading calculations required in subsequent rendering.
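The vertex transformation stage described above can be sketched in a few lines of code. The following is a minimal illustration, not any vendor's actual implementation: a vertex is multiplied by a combined model-view-projection matrix, divided by its homogeneous coordinate, and mapped to a viewport. The particular projection parameters (90° field of view, near plane 1, far plane 100) and the 640x480 viewport are illustrative assumptions.

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vertex_to_screen(vertex, mvp, width, height):
    """Transform a model-space vertex to pixel coordinates."""
    x, y, z, w = mat_vec(mvp, [*vertex, 1.0])   # to clip space
    ndc_x, ndc_y = x / w, y / w                 # perspective divide
    sx = (ndc_x + 1.0) * 0.5 * width            # viewport mapping
    sy = (1.0 - ndc_y) * 0.5 * height           # flip y: screen origin at top-left
    return sx, sy

# Identity model-view combined with a simple perspective projection
# (near = 1, far = 100, 90-degree vertical field of view).
n, f = 1.0, 100.0
s = 1.0 / math.tan(math.radians(90.0) / 2.0)
proj = [
    [s, 0.0, 0.0, 0.0],
    [0.0, s, 0.0, 0.0],
    [0.0, 0.0, (f + n) / (n - f), 2.0 * f * n / (n - f)],
    [0.0, 0.0, -1.0, 0.0],
]

# A vertex on the view axis lands at the centre of the viewport.
print(vertex_to_screen((0.0, 0.0, -2.0), proj, 640, 480))  # → (320.0, 240.0)
```

In a full pipeline the same machinery handles the model and view matrices as well, and surface normals are transformed by the inverse transpose of the model-view matrix before the lighting calculations run.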
Hardware implementations of the geometry pipeline were introduced in the early Evans & Sutherland Picture System, but perhaps received broader recognition when later applied across the range of graphics systems introduced by Silicon Graphics (SGI). Initially the SGI geometry hardware performed only simple model-space to screen-space viewing transformations, with all lighting and shading handled by a separate hardware stage, but in later, much higher-performance systems such as the SGI RealityEngine, it was also applied to part of the rendering work.
More recently, dating roughly from the late 1990s, the hardware support required to manipulate and render quite complex scenes has become accessible to the consumer market. NVIDIA and ATI (now part of AMD) are two current leading representatives of hardware vendors in this space. NVIDIA's GeForce line of graphics cards was the first in the consumer market to implement geometry processing in hardware; earlier consumer accelerators from 3dfx and others performed only rasterization, leaving geometry processing to the host CPU.
This subject matter forms part of the technical foundation of modern computer graphics, and is taught at both the undergraduate and graduate levels as part of a computer science education.