Different species are able to see different parts of the light spectrum; for example, bees can see into the ultraviolet, while pit vipers can accurately target prey using their infrared-sensitive pit organs.
In the second half of the 19th century, several key principles of nervous-system organisation were identified, such as the neuron doctrine (the neuron as the basic unit of the nervous system) and functional localisation in the brain. These would become tenets of the fledgling field of neuroscience and would support further understanding of the visual system.
The notion that the cerebral cortex is divided into functionally distinct cortices, now known to be responsible for capacities such as touch (the somatosensory cortex), movement (the motor cortex), and vision (the visual cortex), was first proposed by Franz Joseph Gall in 1810. Evidence for functionally distinct areas of the brain (and, specifically, of the cerebral cortex) mounted throughout the 19th century with Paul Broca's discovery of the language centre (1861) and Gustav Fritsch and Eduard Hitzig's discovery of the motor cortex (1870). Based on selective damage to parts of the brain and the functional effects this produced (lesion studies), David Ferrier proposed in 1876 that visual function was localised to the parietal lobe. In 1881, Hermann Munk more accurately located vision in the occipital lobe, where the primary visual cortex is now known to be.
The eye is a complex biological device. The functioning of a camera is often compared with that of the eye, mainly because both focus light from objects in the visual field onto a light-sensitive medium. In the case of the camera, this medium is film or an electronic sensor; in the case of the eye, it is an array of visual receptors. Beyond this simple geometrical similarity, which follows from the laws of optics, the eye functions as a transducer, as does a CCD camera.
Light entering the eye is refracted as it passes through the cornea. It then passes through the pupil (controlled by the iris) and is further refracted by the lens. The cornea and lens act together as a compound lens to project an inverted image onto the retina.
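The projection of an inverted image onto the retina can be illustrated with the thin-lens equation, 1/f = 1/d_o + 1/d_i. This is only a minimal sketch: the combined corneal–lens power of roughly 60 dioptres is an approximate textbook figure assumed here for illustration, not a value from the text.

```python
# Illustrative sketch: image formation by the eye's compound lens,
# using the thin-lens equation 1/f = 1/d_o + 1/d_i.
# The ~60 dioptre combined power of cornea + lens is an assumed,
# approximate figure used only for this example.

def image_distance(focal_length_m, object_distance_m):
    """Return image distance d_i from 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

power_dioptres = 60.0        # assumed combined refractive power
f = 1.0 / power_dioptres     # focal length of about 16.7 mm
d_o = 10.0                   # an object 10 m away

d_i = image_distance(f, d_o)
magnification = -d_i / d_o   # negative sign: the image is inverted

print(f"image distance: {d_i * 1000:.1f} mm behind the lens")
print(f"magnification:  {magnification:.6f} (inverted)")
```

The image lands a few hundredths of a millimetre behind the ideal focal plane for a distant object, which is why the lens must change shape (accommodate) for near vision.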
Rods and cones differ in function. Rods are found primarily in the periphery of the retina and are used to see at low levels of light. Cones are found primarily in the center (or fovea) of the retina. There are three types of cones that differ in the wavelengths of light they absorb; they are usually called short or blue, middle or green, and long or red. Cones are used primarily to distinguish color and other features of the visual world at normal levels of light.
In the retina, the photoreceptors synapse directly onto bipolar cells, which in turn synapse onto the ganglion cells of the innermost layer, which then conduct action potentials to the brain. A significant amount of visual processing arises from the patterns of communication between neurons in the retina. About 130 million photoreceptors absorb light, yet roughly 1.2 million ganglion-cell axons transmit information from the retina to the brain. Processing in the retina includes the formation of center-surround receptive fields of bipolar and ganglion cells, as well as convergence and divergence from photoreceptors to bipolar cells. In addition, other neurons in the retina, particularly horizontal and amacrine cells, transmit information laterally (from a neuron in one layer to an adjacent neuron in the same layer), resulting in more complex receptive fields that can be either indifferent to color and sensitive to motion, or sensitive to color and indifferent to motion.
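The center-surround receptive field mentioned above is commonly modelled as a difference of Gaussians. The sketch below illustrates that standard model; the widths `sigma_c` and `sigma_s` are arbitrary illustrative values, not measured retinal parameters.

```python
import math

# Sketch of an ON-center receptive field as a difference of Gaussians,
# a standard model of center-surround antagonism (the sigma values
# here are illustrative, not measured retinal tuning).

def dog_weight(distance, sigma_c=1.0, sigma_s=3.0):
    """Excitatory center minus inhibitory surround at a given distance."""
    center = math.exp(-distance**2 / (2 * sigma_c**2)) / (2 * math.pi * sigma_c**2)
    surround = math.exp(-distance**2 / (2 * sigma_s**2)) / (2 * math.pi * sigma_s**2)
    return center - surround

# A spot of light confined to the center excites the cell, while light
# falling in the surround suppresses it -- the antagonism that retinal
# circuitry (bipolar, horizontal, and amacrine cells) is thought to build.
print(dog_weight(0.0) > 0)   # True: positive response at the center
print(dog_weight(4.0) < 0)   # True: negative response in the surround

# Convergence: roughly 130 million photoreceptors feed about
# 1.2 million ganglion-cell axons, on the order of 100 receptors per axon.
print(round(130e6 / 1.2e6))  # 108
```

Uniform illumination over the whole field largely cancels, which is why such cells respond best to local contrast rather than absolute light level.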
The final result of all this processing is five different populations of ganglion cells that send visual (image-forming and non-image-forming) information to the brain:
In 2007, Zaidi and co-researchers on both sides of the Atlantic, studying patients without rods and cones, discovered that the novel photoreceptive ganglion cells in humans also have a role in conscious and unconscious visual perception. The peak spectral sensitivity was 481 nm. This shows that there are two pathways for sight in the retina: one based on the classic photoreceptors (rods and cones) and the other, newly discovered, based on photoreceptive ganglion cells, which act as rudimentary visual brightness detectors.
Information about the image is transmitted to the brain along the optic nerve, with different populations of retinal ganglion cells sending their signals through it. About 90% of the axons in the optic nerve go to the lateral geniculate nucleus in the thalamus. These axons originate from the M, P, and K ganglion cells in the retina (see above). This parallel processing is important for reconstructing the visual world: each type of information goes through a different route to perception. Another population sends information to the superior colliculus in the midbrain, which assists in controlling eye movements (saccades).
A final population of photosensitive ganglion cells, containing melanopsin, sends information via the retinohypothalamic tract (RHT) to the pretectum (pupillary reflex) and to several structures involved in the control of circadian rhythms and sleep, such as the suprachiasmatic nucleus (SCN, the biological clock) and the ventrolateral preoptic nucleus (VLPO, a region involved in sleep regulation). A recently discovered role of these photoreceptive ganglion cells is that they mediate conscious and unconscious vision, acting as rudimentary visual brightness detectors, as shown in rodless, coneless eyes.
The lateral geniculate nucleus (LGN) is a sensory relay nucleus in the thalamus of the brain. In humans and other catarrhine primates (Old World monkeys, including the Cercopithecidae, and apes), the LGN consists of six layers. Layers 1, 4, and 6 correspond to information from the contralateral (crossed) fibres of the nasal visual field; layers 2, 3, and 5 correspond to information from the ipsilateral (uncrossed) fibres of the temporal visual field. Layer 1 contains M cells, which correspond to the M (magnocellular) cells of the optic nerve of the opposite eye and are concerned with depth and motion. Layers 4 and 6 also connect to the opposite eye, but to the P (parvocellular) cells of the optic nerve, which carry colour and edge information. By contrast, layers 2, 3, and 5 connect to the M and P cells of the optic nerve from the same side of the brain as the LGN. Spread out, the six layers would have the area of a credit card and about three times its thickness; rolled up, they form two ellipsoids about the size and shape of two small birds' eggs. Between the six layers lie smaller cells that receive information from the K (koniocellular) cells in the retina, which carry colour information. The neurons of the LGN then relay the visual image to the primary visual cortex (V1), located at the back of the brain (caudal end) in the occipital lobe, in and close to the calcarine sulcus.
The optic radiations carry information from the lateral geniculate nucleus of the thalamus to layer 4 of the visual cortex. The P layer neurons of the LGN relay to V1 layer 4Cβ; the M layer neurons relay to V1 layer 4Cα. The K layer neurons of the LGN relay to clusters of neurons called blobs in layers 2 and 3 of V1.
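The layer scheme described in the two preceding paragraphs can be summarised as a small lookup table. This is only a compact restatement of the anatomy given in the text; the field names and string labels are arbitrary choices for illustration.

```python
# Summary of the LGN layer scheme described above:
# "contralateral"/"ipsilateral" = eye of origin relative to the LGN;
# "v1_target" follows the optic-radiation projections listed in the text
# (M layers -> 4C-alpha, P layers -> 4C-beta). Field names are illustrative.

LGN_LAYERS = {
    1: {"cell_type": "M", "eye": "contralateral", "v1_target": "4C-alpha"},
    2: {"cell_type": "M", "eye": "ipsilateral",   "v1_target": "4C-alpha"},
    3: {"cell_type": "P", "eye": "ipsilateral",   "v1_target": "4C-beta"},
    4: {"cell_type": "P", "eye": "contralateral", "v1_target": "4C-beta"},
    5: {"cell_type": "P", "eye": "ipsilateral",   "v1_target": "4C-beta"},
    6: {"cell_type": "P", "eye": "contralateral", "v1_target": "4C-beta"},
}
# Koniocellular (K) cells sit between these layers and relay to the
# blobs in V1 layers 2 and 3, as described in the text.

crossed = [n for n, v in LGN_LAYERS.items() if v["eye"] == "contralateral"]
print(crossed)  # [1, 4, 6], the crossed-fibre layers
```

Querying the table reproduces the groupings in the text: layers 1, 4, and 6 carry the crossed fibres, and only the two magnocellular layers (1 and 2) target 4Cα.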
There is a direct correspondence between angular position in the eye's field of view and nerve position in V1, maintained all the way through the optic tract. Beyond this juncture in V1, the image path ceases to be straightforward; there is far more cross-connection within the visual cortex.
The visual cortex is the largest system in the human brain and is responsible for higher-level processing of the visual image. It lies at the rear of the brain, above the cerebellum. The interconnections between the layers of the cortex, the thalamus, the cerebellum, the hippocampus, and the remaining areas of the brain are under active investigation. Currently, much of what is known stems from patients with damage to known areas of the brain, together with studies of the cognitive functions that were spared. See visual modularity for a discussion of the modular thesis of visual perception.