See N. Chomsky and M. Halle, The Sound Pattern of English (1968); M. Kenstowicz and C. Kisseberth, Generative Phonology (1979); P. Hawkins, Introducing Phonology (1984).
An important part of traditional phonology has been the study of which sounds can be grouped into distinctive units within a language. For example, the [p] sound in "pot" is aspirated, while the word- and syllable-final [p] in "soup" is not aspirated (indeed, it might be realized as a glottal stop). However, English speakers intuitively treat both sounds as variants of the same phonological category: the English phoneme /p/. Traditionally, it would be argued that if the word-initial, aspirated [p] were interchanged with the word-final, unaspirated [p] in "soup", English speakers would still perceive the 'same' /p/ (though speech perception findings now cast doubt on this claim). Still, some sort of 'sameness' holds in English that does not hold universally in other languages. In Thai and Quechua, for example, the presence or absence of aspiration distinguishes phonemes, yielding lexical contrasts between otherwise minimally different words. In addition to the minimal meaningful sounds (the phonemes), phonology studies how sounds alternate, such as the /p/ in English, and topics such as syllable structure, stress, accent, and intonation.
The principles of phonological theory have also been applied to the analysis of sign languages, even though the sub-lexical units are not instantiated as speech sounds. The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. On the other hand, it must be noted that it is difficult to analyze phonologically a language one does not speak, and most phonological analysis takes place with recourse to phonetic information.
The writing systems of some languages are based on the phonemic principle of having one letter (or combination of letters) per phoneme and vice versa. Ideally, speakers can correctly write whatever they can say, and can correctly read anything that is written. (In practice, this ideal is never realized.) However, in English, different spellings can be used for the same phoneme (e.g., "rude" and "food" have the same vowel sound), and the same letter (or combination of letters) can represent different phonemes (e.g., the "th" consonant sounds of "thin" and "this" are different). To avoid confusion arising from orthography, phonologists represent phonemes by writing them between slashes, as in /p/. Variants of phonemes, or attempts at representing actual speech sounds, are instead enclosed in square brackets, as in [p]. While the symbols between slashes may be based on spelling conventions, the symbols between square brackets are usually taken from the International Phonetic Alphabet (IPA) or some other phonetic transcription system. Additionally, angle brackets, "< >", can be used to isolate the graphemes of an alphabetic writing system.
Part of the phonological study of a language involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. Even though a language may distinguish only a small number of phonemes, speakers actually produce many more phonetic sounds. Thus, a phoneme in a particular language can be instantiated in many ways.
Traditionally, looking for minimal pairs forms part of the research into the phoneme inventory of a language. A minimal pair is a pair of words from the same language that differ in only a single categorical sound and are recognized by speakers as two different words. When a minimal pair exists, the two sounds are said to be realizations of distinct phonemes. However, since it is often impossible to detect or agree on the existence of all the possible phonemes of a language with this method, other approaches are used as well.
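The search for minimal pairs lends itself to a mechanical illustration. The sketch below is a toy example only: the lexicon, its broad transcriptions, and the equal-length comparison are simplifying assumptions, not a real field method (which must also handle words of unequal length and free variation).

```python
def minimal_pairs(lexicon):
    """Find pairs of transcribed words differing in exactly one segment."""
    pairs = []
    words = list(lexicon.items())
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            (w1, seg1), (w2, seg2) = words[i], words[j]
            if len(seg1) == len(seg2):
                # Collect the positions where the transcriptions disagree.
                diffs = [(a, b) for a, b in zip(seg1, seg2) if a != b]
                if len(diffs) == 1:
                    pairs.append((w1, w2, diffs[0]))
    return pairs

# Toy lexicon: words mapped to tuples of segments (broad transcription).
lexicon = {
    "pin": ("p", "ɪ", "n"),
    "bin": ("b", "ɪ", "n"),
    "pit": ("p", "ɪ", "t"),
}
print(minimal_pairs(lexicon))
# "pin"/"bin" differ only in p/b, so /p/ and /b/ are distinct phonemes.
```

Each returned triple names the two words and the single contrasting pair of sounds, which is exactly the evidence a minimal pair supplies for a phonemic distinction.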
The /t/ sounds in the words "tub", "stub", "but", "butter", and "button" are all pronounced differently in American English, yet all are intuited to be "the same sound"; they therefore constitute another example of allophones of the same phoneme in English. However, such an intuition could be interpreted as a function of post-lexical recognition of the sounds: that is, all are seen as examples of English /t/ once the word itself has been recognized.
In English, for example, /p/ and /b/ are distinctive units of sound (i.e., they are phonemes; the difference between them is phonemic, or phonematic). This can be seen from minimal pairs such as "pin" and "bin", which mean different things but differ in only one sound. On the other hand, /p/ is often pronounced differently depending on its position relative to other sounds: the /p/ in "pin" is aspirated, while the same phoneme in "spin" is not. Yet these different pronunciations are still considered, by linguists invoking the intuitions of native speakers, to be the same "sound".
The findings and insights of speech perception and articulation research complicate this idea of interchangeable allophones being perceived as the same phoneme, however attractive it might be for linguists who wish to rely on the intuitions of native speakers. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at the word level, is highly co-articulated, so it is problematic to expect to splice words into simple segments without affecting speech perception. In other words, interchanging allophones is appealing for intuitive linguistics, but the idea cannot survive what co-articulation actually does to spoken sounds. Yet human speech perception is robust and versatile (operating under diverse conditions) in part precisely because it can cope with such co-articulation.
There are different methods for determining which phoneme a given set of allophones should fall under. Counter-intuitively, the principle of phonetic similarity is not always used, which tends to make the phoneme seem abstracted away from the phonetic realities of speech. It should be remembered that, just because allophones can be grouped under phonemes for the purpose of linguistic analysis, this does not necessarily mean that such grouping reflects how the human brain actually processes a language. On the other hand, some analytic notion of a language beneath the word level is usual whenever the language is written alphabetically, so one could also speak of a phonology of reading and writing.
The Polish scholar Jan Baudouin de Courtenay (together with his former student Mikołaj Kruszewski) coined the word phoneme in 1876, and his work, though often unacknowledged, is considered to be the starting point of modern phonology. He worked not only on the theory of the phoneme but also on phonetic alternations (i.e., what is now called allophony and morphophonology). His influence on Ferdinand de Saussure was also significant.
Prince Nikolai Trubetzkoy's posthumously published work, the Principles of Phonology (1939), is considered the foundation of the Prague School of phonology. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although the concept had first been recognized by Baudouin de Courtenay. Trubetzkoy split phonology into phonemics and archiphonemics; the former has had more influence than the latter. Another important figure in the Prague School was Roman Jakobson, who was one of the most prominent linguists of the twentieth century.
In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for Generative Phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and have the binary values + or -. There are at least two levels of representation: the underlying representation and the surface phonetic representation. Ordered phonological rules govern how the underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the Generativists folded morphophonology into phonology, which both solved and created problems.
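The SPE idea of ordered rules mapping an underlying representation to a surface form can be sketched as follows. This is a minimal illustration, not SPE's actual feature-based formalism: the two rules are drastically simplified stand-ins for English word-initial aspiration and American English flapping, and the segment lists are invented for the example.

```python
def aspirate(segments):
    """Toy rule: aspirate /p t k/ word-initially (simplified English)."""
    if segments and segments[0] in ("p", "t", "k"):
        return [segments[0] + "ʰ"] + segments[1:]
    return segments

def flap(segments):
    """Toy rule: flap /t/ between vowels (simplified American English)."""
    vowels = set("aeiouɪʊəæʌ")
    out = list(segments)
    for i in range(1, len(out) - 1):
        if out[i] == "t" and out[i - 1] in vowels and out[i + 1] in vowels:
            out[i] = "ɾ"
    return out

def derive(underlying, rules):
    """Apply the rules in a fixed order; each rule feeds the next."""
    form = list(underlying)
    for rule in rules:
        form = rule(form)
    return form

print(derive(["t", "ɪ", "p"], [aspirate, flap]))      # ['tʰ', 'ɪ', 'p']
print(derive(["b", "ʌ", "t", "ə"], [aspirate, flap]))  # ['b', 'ʌ', 'ɾ', 'ə']
```

The fixed order of application is the essential point: in a derivational theory like SPE, the output of one rule is the input to the next, so reordering the rules can change the surface form.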
Natural Phonology was a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes which interact with one another; which ones are active and which are suppressed are language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second-most prominent Natural Phonologist is Stampe's wife, Patricia Donegan; there are many Natural Phonologists in Europe, though also a few others in the U.S., such as Geoffrey Pullum. The principles of Natural Phonology were extended to morphology by Wolfgang U. Dressler, who founded Natural Morphology.
In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving several parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into Feature Geometry, which became the standard theory of representation for theories of the organization of phonology as different as Lexical Phonology and Optimality Theory.
Government Phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, John Harris, and many others.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed Optimality Theory — an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints which is ordered by importance: a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become the dominant trend in phonology. Though this usually goes unacknowledged, Optimality Theory was strongly influenced by Natural Phonology; both view phonology in terms of constraints on speakers and their production, though these constraints are formalized in very different ways.
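The core evaluation step of Optimality Theory can be sketched computationally. Everything below is a toy illustration: the constraints are crude stand-ins for the familiar NoCoda, MAX (no deletion), and DEP (no insertion) constraints, and the input and candidate set are invented. The essential mechanism is that a candidate's violations form a vector ordered by constraint ranking, and the winner is the candidate whose vector is lexicographically smallest, so a lower-ranked constraint may be violated to satisfy a higher-ranked one.

```python
def no_coda(inp, cand):
    """Toy NoCoda: one violation per syllable ending in a consonant."""
    return sum(1 for syll in cand.split(".") if syll and syll[-1] not in "aeiou")

def max_io(inp, cand):
    """Toy MAX: penalize deletion (input segments missing from candidate)."""
    return max(0, len(inp) - len(cand.replace(".", "")))

def dep_io(inp, cand):
    """Toy DEP: penalize insertion (candidate segments absent from input)."""
    return max(0, len(cand.replace(".", "")) - len(inp))

def optimal(inp, candidates, ranking):
    """Pick the candidate with the lexicographically smallest
    violation vector under the given constraint ranking."""
    return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

candidates = ["pat", "pa", "pa.ta"]  # faithful, deletion, epenthesis

# Faithfulness outranks NoCoda: the faithful candidate "pat" wins.
print(optimal("pat", candidates, [max_io, dep_io, no_coda]))  # pat
# NoCoda outranks MAX: deletion ("pa") wins instead.
print(optimal("pat", candidates, [no_coda, dep_io, max_io]))  # pa
```

Reranking the same universal constraints yields different surface forms, which is how Optimality Theory models cross-linguistic variation.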