One version of natural deductive logic has no axioms. System L, developed by E.J. Lemmon, has only nine primitive rules that govern the syntax of a proof.
The nine primitive rules of system L are:

1. The Rule of Assumption (A)
2. Modus Ponendo Ponens (MPP)
3. The Rule of Double Negation (DN)
4. The Rule of Conditional Proof (CP)
5. The Rule of ∧-introduction (∧I)
6. The Rule of ∧-elimination (∧E)
7. The Rule of ∨-introduction (∨I)
8. The Rule of ∨-elimination (∨E)
9. Reductio Ad Absurdum (RAA)
In system L, a proof is defined as a finite sequence of lines that satisfies the following conditions:
If no premise is given, the sequent is called a theorem. Therefore, the definition of a theorem in system L is:
An example of the proof of a sequent (Modus Tollendo Tollens in this case):
p → q, ¬q ⊢ ¬p [Modus Tollendo Tollens (MTT)]  
Assumption number  Line number  Formula (wff)         Lines in use and justification

1                  (1)          (p → q)               A
2                  (2)          ¬q                    A
3                  (3)          p                     A (for RAA)
1,3                (4)          q                     1,3 MPP
1,2,3              (5)          q ∧ ¬q                2,4 ∧I
1,2                (6)          ¬p                    3,5 RAA
Q.E.D  
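The sequent proved above can also be checked semantically: MTT is valid just in case every valuation making both premises true also makes the conclusion true. The following is a minimal Python sketch of this truth-table check (the function names are illustrative, not part of system L):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false
    return (not a) or b

def mtt_valid():
    # Check p -> q, ¬q ⊨ ¬p over all four valuations of p and q
    for p, q in product([True, False], repeat=2):
        premises = implies(p, q) and (not q)
        if premises and not (not p):  # premises hold but conclusion ¬p fails
            return False
    return True

print(mtt_valid())  # True: the sequent is semantically valid
```

This confirms the syntactic derivation agrees with the two-valued semantics, though the proof above of course makes no appeal to truth tables.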
An example of the proof of a sequent (a theorem in this case):
⊢ p ∨ ¬p

Assumption number  Line number  Formula (wff)           Lines in use and justification

1                  (1)          ¬(p ∨ ¬p)               A (for RAA)
2                  (2)          ¬p                      A (for RAA)
2                  (3)          (p ∨ ¬p)                2 ∨I
1,2                (4)          (p ∨ ¬p) ∧ ¬(p ∨ ¬p)    1,2 ∧I
1                  (5)          ¬¬p                     2,4 RAA
1                  (6)          p                       5 DN
1                  (7)          (p ∨ ¬p)                6 ∨I
1                  (8)          (p ∨ ¬p) ∧ ¬(p ∨ ¬p)    1,7 ∧I
                   (9)          ¬¬(p ∨ ¬p)              1,8 RAA
                   (10)         (p ∨ ¬p)                9 DN
Q.E.D  
Each rule of system L has its own requirements for the type of input(s) it can accept, and it has its own way of treating and computing the assumptions carried by its inputs.
Gentzen was motivated by a desire to establish the consistency of number theory, and he found immediate use for his natural deduction calculus. He was nevertheless dissatisfied with the complexity of his proofs, and in 1938 gave a new consistency proof using his sequent calculus. In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become the definitive work on natural deduction, and included applications for modal and second-order logic.
The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives (Martin-Löf, 1996).
More formally, a finite sequence β_{1}, ..., β_{n} of propositions is said to be a deduction of α from a collection of premises Σ if β_{n} = α and each β_{i} is either a member of Σ, an axiom of the system, or the result of applying an inference rule to earlier members of the sequence.
Different versions of axiomatic propositional logic contain a few axioms, usually three or more, in addition to one or more inference rules. For instance, Gottlob Frege's axiomatization of propositional logic, the first such attempt, has six propositional axioms and two rules. Bertrand Russell and Alfred North Whitehead also suggested a system with five axioms.

For instance, a version of axiomatic propositional logic due to Jan Łukasiewicz (1878–1956) has a set A of axioms adopted as follows:

1. p → (q → p)
2. (p → (q → r)) → ((p → q) → (p → r))
3. (¬p → ¬q) → (q → p)

The inference rule (for this system, modus ponens) allows one to derive new statements from the axioms or from given wffs of the set Σ.
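The notion of axiomatic derivation can be made concrete with a small proof checker. The following Python sketch is our own illustration (not a standard tool): formulas are nested tuples, the three Łukasiewicz axiom schemas are recognised by instantiating metavariables, and modus ponens is the only rule.

```python
# Formulas: atoms are strings; ('->', a, b) is implication; ('not', a) is negation.
IMP = lambda a, b: ('->', a, b)
NOT = lambda a: ('not', a)

# Łukasiewicz's three axiom schemas; the strings p, q, r are metavariables
AXIOMS = [
    IMP('p', IMP('q', 'p')),                                          # A1
    IMP(IMP('p', IMP('q', 'r')), IMP(IMP('p', 'q'), IMP('p', 'r'))),  # A2
    IMP(IMP(NOT('p'), NOT('q')), IMP('q', 'p')),                      # A3
]

def match(schema, formula, env):
    """Try to instantiate the metavariables of schema so it equals formula."""
    if isinstance(schema, str):          # a metavariable
        if schema in env:
            return env[schema] == formula
        env[schema] = formula
        return True
    if isinstance(formula, str) or schema[0] != formula[0] or len(schema) != len(formula):
        return False
    return all(match(s, f, env) for s, f in zip(schema[1:], formula[1:]))

def is_axiom(formula):
    return any(match(ax, formula, {}) for ax in AXIOMS)

def check_proof(lines):
    """Each line is ('axiom', wff) or ('mp', i, j, wff), the latter claiming that
    wff follows from lines i and j (1-based) by modus ponens."""
    proved = []
    for line in lines:
        if line[0] == 'axiom':
            if not is_axiom(line[1]):
                return False
            proved.append(line[1])
        else:
            _, i, j, wff = line
            major, minor = proved[i - 1], proved[j - 1]
            if major != IMP(minor, wff):   # major must read: minor -> wff
                return False
            proved.append(wff)
    return True
```

For example, the classic five-line derivation of a → a (an A2 instance, two A1 instances, and two uses of modus ponens) passes this checker.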
The most important judgments in logic are of the form "A is true". The letter A stands for any expression representing a proposition; the truth judgments thus require a more primitive judgment: "A is a proposition". Many other judgments have been studied; for example, "A is false" (see classical logic), "A is true at time t" (see temporal logic), "A is necessarily true" or "A is possibly true" (see modal logic), "the program M has type τ" (see programming languages and type theory), "A is achievable from the available resources" (see linear logic), and many others. To start with, we shall concern ourselves with the simplest two judgments "A is a proposition" and "A is true", abbreviated as "A prop" and "A true" respectively.
The judgment "A prop" defines the structure of valid proofs of A, which in turn defines the structure of propositions. For this reason, the inference rules for this judgment are sometimes known as formation rules. To illustrate, if we have two propositions A and B (that is, the judgments "A prop" and "B prop" are evident), then we form the compound proposition A and B, written symbolically as "$A \wedge B$". We can write this in the form of an inference rule:

$\cfrac{A\ \hbox{prop} \quad B\ \hbox{prop}}{(A \wedge B)\ \hbox{prop}}\ \wedge_F$
Dual to introduction rules are elimination rules to describe how to deconstruct information about a compound proposition into information about its constituents. Thus, from "A ∧ B true", we can conclude "A true" and "B true":
$\cfrac{A \wedge B\ \hbox{true}}{A\ \hbox{true}}\ \wedge_{E1} \qquad \cfrac{A \wedge B\ \hbox{true}}{B\ \hbox{true}}\ \wedge_{E2}$

As an example of the use of these rules, the commutativity of conjunction can be derived by eliminating both components and reassembling them:

$\cfrac{\cfrac{A \wedge B\ \hbox{true}}{B\ \hbox{true}}\ \wedge_{E2} \quad \cfrac{A \wedge B\ \hbox{true}}{A\ \hbox{true}}\ \wedge_{E1}}{B \wedge A\ \hbox{true}}\ \wedge_I$
The inference figures we have seen so far are not sufficient to state the rules of implication introduction or disjunction elimination; for these, we need a more general notion of hypothetical derivation.
The notion of hypothetical judgement is internalised as the connective of implication. The introduction and elimination rules are as follows.
$\cfrac{\begin{matrix} \cfrac{}{A\ \hbox{true}}\ u \\ \vdots \\ B\ \hbox{true} \end{matrix}}{A \supset B\ \hbox{true}}\ \supset I^u \qquad \cfrac{A \supset B\ \hbox{true} \quad A\ \hbox{true}}{B\ \hbox{true}}\ \supset E$
In the introduction rule, the antecedent named u is discharged in the conclusion. This is a mechanism for delimiting the scope of the hypothesis: its sole reason for existence is to establish "B true"; it cannot be used for any other purpose, and in particular, it cannot be used below the introduction. As an example, consider the derivation of "A ⊃ (B ⊃ (A ∧ B)) true":
$\cfrac{\cfrac{\cfrac{\cfrac{}{A\ \hbox{true}}\ u \quad \cfrac{}{B\ \hbox{true}}\ w}{A \wedge B\ \hbox{true}}\ \wedge I}{B \supset (A \wedge B)\ \hbox{true}}\ \supset I^w}{A \supset (B \supset (A \wedge B))\ \hbox{true}}\ \supset I^u$
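Under the Curry–Howard reading discussed later in this article, this derivation is itself a program: each ⊃I corresponds to a lambda abstraction over the hypothesis it discharges, and ∧I to pairing. A brief Python sketch (illustrative only):

```python
# The derivation of A ⊃ (B ⊃ (A ∧ B)): the hypotheses u (for A) and w (for B)
# become lambda binders, and the ∧I step becomes construction of a pair.
proof_term = lambda a: lambda b: (a, b)

# Applying the proof term to evidence for A and then for B yields evidence for A ∧ B
evidence = proof_term("proof of A")("proof of B")
```

The order of the lambdas mirrors the order of the discharges: the outer binder corresponds to ⊃I^u, the inner to ⊃I^w.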
With hypothetical derivations, we can now write the elimination rule for disjunction:
$\cfrac{A \vee B\ \hbox{true} \quad \begin{matrix} \cfrac{}{A\ \hbox{true}}\ u \\ \vdots \\ C\ \hbox{true} \end{matrix} \quad \begin{matrix} \cfrac{}{B\ \hbox{true}}\ w \\ \vdots \\ C\ \hbox{true} \end{matrix}}{C\ \hbox{true}}\ \vee E^{u,w}$
Negation is similar to implication.
$\cfrac{\begin{matrix} \cfrac{}{A\ \hbox{true}}\ u \\ \vdots \\ p\ \hbox{true} \end{matrix}}{\lnot A\ \hbox{true}}\ \lnot I^{u,p} \qquad \cfrac{\lnot A\ \hbox{true} \quad A\ \hbox{true}}{C\ \hbox{true}}\ \lnot E$
Local soundness says that the elimination rules are no stronger than the introduction rules: introducing a connective and then immediately eliminating it yields no new information, so the detour can be reduced away. For conjunction:

$\cfrac{\cfrac{\begin{matrix} u \\ A\ \hbox{true} \end{matrix} \quad \begin{matrix} w \\ B\ \hbox{true} \end{matrix}}{A \wedge B\ \hbox{true}}\ \wedge I}{A\ \hbox{true}}\ \wedge E_1 \quad \Rightarrow \quad \begin{matrix} u \\ A\ \hbox{true} \end{matrix}$
Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions:
$\begin{matrix} u \\ A \wedge B\ \hbox{true} \end{matrix} \quad \Rightarrow \quad \cfrac{\cfrac{\begin{matrix} u \\ A \wedge B\ \hbox{true} \end{matrix}}{A\ \hbox{true}}\ \wedge E_1 \quad \cfrac{\begin{matrix} u \\ A \wedge B\ \hbox{true} \end{matrix}}{B\ \hbox{true}}\ \wedge E_2}{A \wedge B\ \hbox{true}}\ \wedge I$
These notions correspond exactly to β-reduction and η-expansion in the lambda calculus, using the Curry–Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961; see his book Natural deduction: a proof-theoretical study, A&W Stockholm 1965, no ISBN. It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.
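Under Curry–Howard, the local soundness reduction for conjunction is exactly β-reduction for pairs, and the local completeness expansion is η-expansion. A small Python illustration (the function names are ours):

```python
def pair(a, b):
    return (a, b)    # ∧I: build a proof of A ∧ B from proofs of A and of B

def fst(p):
    return p[0]      # ∧E1: extract the proof of A

def snd(p):
    return p[1]      # ∧E2: extract the proof of B

# β-reduction (local soundness): introduce, then immediately eliminate;
# the detour vanishes and the original component is recovered.
beta_ok = fst(pair("a", "b")) == "a"

# η-expansion (local completeness): eliminate, then re-introduce;
# the expanded proof is equivalent to the original p.
p = ("a", "b")
eta_ok = pair(fst(p), snd(p)) == p
```

Both checks succeed, mirroring the two derivation transformations above.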
The logic of the earlier section is an example of a single-sorted logic, i.e., a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms. More precisely, we will add a new kind of judgement, "t is a term" (or "t term") where t is schematic. We shall fix a countable set V of variables, another countable set F of function symbols, and construct terms as follows:
$\cfrac{v \in V}{v\ \hbox{term}}\ \hbox{var}_F \qquad \cfrac{f \in F \quad t_1\ \hbox{term} \quad t_2\ \hbox{term} \quad \cdots \quad t_n\ \hbox{term}}{f(t_1, t_2, \ldots, t_n)\ \hbox{term}}\ \hbox{app}_F$

Atomic propositions are formed by applying predicate symbols, drawn from a countable set P, to terms:

$\cfrac{\varphi \in P \quad t_1\ \hbox{term} \quad t_2\ \hbox{term} \quad \cdots \quad t_n\ \hbox{term}}{\varphi(t_1, t_2, \ldots, t_n)\ \hbox{prop}}\ \hbox{pred}_F$
Quantified propositions are formed by binding a schematic term variable:

$\cfrac{\begin{matrix} \cfrac{}{x\ \hbox{term}}\ u \\ \vdots \\ A\ \hbox{prop} \end{matrix}}{\forall x.\ A\ \hbox{prop}}\ \forall F^u \qquad \cfrac{\begin{matrix} \cfrac{}{x\ \hbox{term}}\ u \\ \vdots \\ A\ \hbox{prop} \end{matrix}}{\exists x.\ A\ \hbox{prop}}\ \exists F^u$

The introduction and elimination rules for the quantifiers are:

$\cfrac{\begin{matrix} \cfrac{}{a\ \hbox{term}}\ u \\ \vdots \\ [a/x] A\ \hbox{true} \end{matrix}}{\forall x.\ A\ \hbox{true}}\ \forall I^{u,a} \qquad \cfrac{\forall x.\ A\ \hbox{true} \quad t\ \hbox{term}}{[t/x] A\ \hbox{true}}\ \forall E$

$\cfrac{[t/x] A\ \hbox{true}}{\exists x.\ A\ \hbox{true}}\ \exists I \qquad \cfrac{\exists x.\ A\ \hbox{true} \quad \begin{matrix} \cfrac{}{a\ \hbox{term}}\ u \quad \cfrac{}{[a/x] A\ \hbox{true}}\ v \\ \vdots \\ C\ \hbox{true} \end{matrix}}{C\ \hbox{true}}\ \exists E^{a,u,v}$
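The substitution notation [t/x]A used by ∀E and ∃I can be implemented directly on formula trees. The following Python sketch uses our own tuple encoding; for brevity it does not rename bound variables, so it assumes t contains no variables bound in A (i.e., it is not fully capture-avoiding):

```python
# Terms: variables are strings; ('f', t1, ..., tn) applies function symbol f.
# Formulas: ('pred', name, t1, ...), ('forall', x, A), ('exists', x, A),
# and connectives such as ('and', A, B) or ('not', A).

def subst_term(t, x, term):
    """[t/x] applied to a term."""
    if isinstance(term, str):
        return t if term == x else term
    return (term[0],) + tuple(subst_term(t, x, u) for u in term[1:])

def subst(t, x, formula):
    """[t/x]A: instantiate the free variable x by the term t, as in ∀E and ∃I."""
    tag = formula[0]
    if tag == 'pred':
        return ('pred', formula[1]) + tuple(subst_term(t, x, u) for u in formula[2:])
    if tag in ('forall', 'exists'):
        y, body = formula[1], formula[2]
        if y == x:            # x is rebound here, so substitution stops
            return formula
        return (tag, y, subst(t, x, body))
    # Connectives: substitute in every subformula
    return (tag,) + tuple(subst(t, x, a) for a in formula[1:])
```

For example, substituting the constant c for x in ∀y. P(x, y) changes only the free occurrence of x, while substituting into ∀x. P(x) is a no-op because x is rebound.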
So far the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules:
$\cfrac{\begin{matrix} \cfrac{}{p\ \hbox{prop}}\ u \\ \vdots \\ A\ \hbox{prop} \end{matrix}}{\forall p.\ A\ \hbox{prop}}\ \forall F^u \qquad \cfrac{\begin{matrix} \cfrac{}{p\ \hbox{prop}}\ u \\ \vdots \\ A\ \hbox{prop} \end{matrix}}{\exists p.\ A\ \hbox{prop}}\ \exists F^u$
Hypothetical derivations can be localised: instead of writing the hypotheses above a vertical ellipsis, they are collected to the left of a turnstile ⊢:

$\begin{matrix} \cfrac{}{J_1}\ u_1 \quad \cfrac{}{J_2}\ u_2 \quad \cdots \quad \cfrac{}{J_n}\ u_n \\ \vdots \\ J \end{matrix} \quad \Rightarrow \quad u_1{:}J_1, u_2{:}J_2, \ldots, u_n{:}J_n \vdash J$
Proofs can be made explicit as first-class objects, with a judgment "π proof"; each logical rule then has both a formation rule for proof terms and a localised version that annotates the sequent with a proof term:

$\cfrac{u \in V}{u\ \hbox{proof}}\ \hbox{proof}_F \qquad \cfrac{}{u{:}A\ \hbox{true} \vdash u : A\ \hbox{true}}\ \hbox{hyp}$

$\cfrac{\pi_1\ \hbox{proof} \quad \pi_2\ \hbox{proof}}{(\pi_1, \pi_2)\ \hbox{proof}}\ \hbox{pair}_F \qquad \cfrac{\Gamma \vdash \pi_1 : A \quad \Gamma \vdash \pi_2 : B}{\Gamma \vdash (\pi_1, \pi_2) : A \wedge B}\ \wedge I$

$\cfrac{\pi\ \hbox{proof}}{\mathbf{fst}\ \pi\ \hbox{proof}}\ \mathbf{fst}_F \qquad \cfrac{\Gamma \vdash \pi : A \wedge B}{\Gamma \vdash \mathbf{fst}\ \pi : A}\ \wedge E_1$

$\cfrac{\pi\ \hbox{proof}}{\mathbf{snd}\ \pi\ \hbox{proof}}\ \mathbf{snd}_F \qquad \cfrac{\Gamma \vdash \pi : A \wedge B}{\Gamma \vdash \mathbf{snd}\ \pi : B}\ \wedge E_2$

$\cfrac{\pi\ \hbox{proof}}{\lambda u.\ \pi\ \hbox{proof}}\ \lambda_F \qquad \cfrac{\Gamma, u{:}A \vdash \pi : B}{\Gamma \vdash \lambda u.\ \pi : A \supset B}\ \supset I$

$\cfrac{\pi_1\ \hbox{proof} \quad \pi_2\ \hbox{proof}}{\pi_1\ \pi_2\ \hbox{proof}}\ \hbox{app}_F \qquad \cfrac{\Gamma \vdash \pi_1 : A \supset B \quad \Gamma \vdash \pi_2 : A}{\Gamma \vdash \pi_1\ \pi_2 : B}\ \supset E$
So far the judgement "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections.
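The reader can indeed reconstruct simple type theory mechanically: the typing rules above translate directly into a checker. Here is a minimal Python sketch (the tuple encoding is our own) covering variables, pairs (∧ read as product) and functions (⊃ read as arrow), with annotated lambda binders so no inference is needed:

```python
# Types: atoms are strings; ('*', A, B) is product (∧); ('->', A, B) is arrow (⊃).
# Terms: ('var', u), ('pair', m, n), ('fst', m), ('snd', m),
#        ('lam', u, A, m) with an annotated binder, and ('app', m, n).

def typeof(ctx, term):
    """Return the type of `term` in context `ctx` (a dict), or None if ill-typed."""
    tag = term[0]
    if tag == 'var':
        return ctx.get(term[1])
    if tag == 'pair':
        a, b = typeof(ctx, term[1]), typeof(ctx, term[2])
        return ('*', a, b) if a and b else None          # ∧I
    if tag == 'fst':
        t = typeof(ctx, term[1])
        return t[1] if t and t[0] == '*' else None       # ∧E1
    if tag == 'snd':
        t = typeof(ctx, term[1])
        return t[2] if t and t[0] == '*' else None       # ∧E2
    if tag == 'lam':
        _, u, a, body = term
        b = typeof({**ctx, u: a}, body)
        return ('->', a, b) if b else None               # ⊃I
    if tag == 'app':
        f, arg = typeof(ctx, term[1]), typeof(ctx, term[2])
        return f[2] if f and f[0] == '->' and f[1] == arg else None  # ⊃E
    return None
```

For instance, λu.(u, u) checks at type A → A × A, the program corresponding to a proof of A ⊃ (A ∧ A).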
The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of nontrivial type theories, which is a big departure from the logical world. (Recall that every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥ true". For this reason, the propositions-as-types, proofs-as-programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.
Like logic, type theory has many extensions and variants, including first-order and higher-order versions. An interesting branch of type theory, known as dependent type theory, allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃, and have the following formation rules:
$\cfrac{\Gamma \vdash A\ \hbox{type} \quad \Gamma, x{:}A \vdash B\ \hbox{type}}{\Gamma \vdash \Pi x{:}A.\ B\ \hbox{type}}\ \Pi F \qquad \cfrac{\Gamma \vdash A\ \hbox{type} \quad \Gamma, x{:}A \vdash B\ \hbox{type}}{\Gamma \vdash \Sigma x{:}A.\ B\ \hbox{type}}\ \Sigma F$

$\cfrac{\Gamma, x{:}A \vdash \pi : B}{\Gamma \vdash \lambda x.\ \pi : \Pi x{:}A.\ B}\ \Pi I \qquad \cfrac{\Gamma \vdash \pi_1 : \Pi x{:}A.\ B \quad \Gamma \vdash \pi_2 : A}{\Gamma \vdash \pi_1\ \pi_2 : [\pi_2/x] B}\ \Pi E$

$\cfrac{\Gamma \vdash \pi_1 : A \quad \Gamma, x{:}A \vdash \pi_2 : B}{\Gamma \vdash (\pi_1, \pi_2) : \Sigma x{:}A.\ B}\ \Sigma I$

$\cfrac{\Gamma \vdash \pi : \Sigma x{:}A.\ B}{\Gamma \vdash \mathbf{fst}\ \pi : A}\ \Sigma E_1 \qquad \cfrac{\Gamma \vdash \pi : \Sigma x{:}A.\ B}{\Gamma \vdash \mathbf{snd}\ \pi : [\mathbf{fst}\ \pi/x] B}\ \Sigma E_2$
Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.
The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type-theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.
This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting:
$\cfrac{}{A \vee \lnot A\ \hbox{true}}\ XM_1 \qquad \cfrac{\lnot\lnot A\ \hbox{true}}{A\ \hbox{true}}\ XM_2 \qquad \cfrac{\begin{matrix} \cfrac{}{\lnot A\ \hbox{true}}\ u \\ \vdots \\ p\ \hbox{true} \end{matrix}}{A\ \hbox{true}}\ XM_3^{u,p}$
(XM_3 is merely XM_2 expressed in terms of ¬E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.
A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgement A true with a more classical notion: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a call/cc or a throw/catch mechanism seen in LISP and its descendants. (See also: first-class control.)
Another important extension was for modal and other logics that need more than just the basic judgement of truth. These were first described in a natural deduction style by Prawitz in 1965, and have since accumulated a large body of related work. To give a simple example, the modal logic of necessity requires one new judgement, "A valid", that is categorical with respect to truth: "A valid" holds if and only if "A true" is derivable without any hypotheses about truth, i.e., if ⋅ ⊢ A true.
This categorical judgement is internalised as a unary connective ◻A (read "necessarily A") with the following introduction and elimination rules:
$\cfrac{A\ \hbox{valid}}{\Box A\ \hbox{true}}\ \Box I \qquad \cfrac{\Box A\ \hbox{true}}{A\ \hbox{true}}\ \Box E$
Note that the premise "A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A true" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgement "A true"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;⋅ ⊢ A true". The introduction and elimination forms are then:
$\cfrac{\Omega; \cdot \vdash \pi : A\ \hbox{true}}{\Omega; \cdot \vdash \mathbf{box}\ \pi : \Box A\ \hbox{true}}\ \Box I \qquad \cfrac{\Omega; \Gamma \vdash \pi : \Box A\ \hbox{true}}{\Omega; \Gamma \vdash \mathbf{unbox}\ \pi : A\ \hbox{true}}\ \Box E$
The modal hypotheses have their own version of the hypothesis rule and substitution theorem.
$\cfrac{}{\Omega, u{:}(A\ \hbox{valid}); \Gamma \vdash u : A\ \hbox{true}}\ \hbox{valid-hyp}$
This framework of separating judgements into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples.
The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bidirectional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search, or even for proof checking (or type-checking in type theory). To address this, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book Introduction to Metamathematics (ISBN 0720421039), gave the first formulation of the sequent calculus in the modern style.
In the sequent calculus all inference rules have a purely bottomup reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus. To give an example, consider disjunction; the right rules are familiar:
$\cfrac{\Gamma \Rightarrow A}{\Gamma \Rightarrow A \vee B}\ \vee R_1 \qquad \cfrac{\Gamma \Rightarrow B}{\Gamma \Rightarrow A \vee B}\ \vee R_2$

The left rule, however, decomposes the disjunctive hypothesis, where the natural deduction elimination rule branches on a derived disjunction:

$\cfrac{\Gamma, u{:}A \Rightarrow C \quad \Gamma, v{:}B \Rightarrow C}{\Gamma, w{:}(A \vee B) \Rightarrow C}\ \vee L \qquad \cfrac{\Gamma \vdash A \vee B \quad \Gamma, u{:}A \vdash C \quad \Gamma, v{:}B \vdash C}{\Gamma \vdash C}\ \vee E$
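Computationally, both ∨E and ∨L amount to case analysis on a tagged sum. A brief Python sketch (the constructor names `inl`/`inr` follow the standard injections; the encoding is ours):

```python
def inl(v):
    return ('inl', v)    # ∨I1: inject a proof of A into A ∨ B

def inr(v):
    return ('inr', v)    # ∨I2: inject a proof of B into A ∨ B

def case(s, on_left, on_right):
    """∨E / ∨L: branch on which disjunct s proves, binding its evidence."""
    tag, v = s
    return on_left(v) if tag == 'inl' else on_right(v)
```

For example, `case(inl(3), lambda a: a + 1, lambda b: b - 1)` takes the left branch; the two continuations play the role of the two hypothetical derivations of C.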
The contrast can be pictured as follows. In natural deduction, information flows in two directions: elimination rules push information downwards from the hypotheses, introduction rules push information upwards towards the conclusion, and the two flows meet in the middle of a derivation. In the sequent calculus, every rule reads bottom-up from the conclusion: left rules act on the hypotheses, right rules act on the conclusion, and the two sides meet only at the initial sequent:

$\cfrac{}{\Gamma, u{:}A \Rightarrow A}\ \hbox{init}$
It is clear from these soundness and completeness theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remains true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider the conjunctions. The right rule is virtually identical to the introduction rule:
sequent calculus:

$\cfrac{\Gamma \Rightarrow \pi_1 : A \quad \Gamma \Rightarrow \pi_2 : B}{\Gamma \Rightarrow (\pi_1, \pi_2) : A \wedge B}\ \wedge R$

natural deduction:

$\cfrac{\Gamma \vdash \pi_1 : A \quad \Gamma \vdash \pi_2 : B}{\Gamma \vdash (\pi_1, \pi_2) : A \wedge B}\ \wedge I$

The left rules, however, perform additional substitutions that are not performed in the corresponding elimination rules:

sequent calculus:

$\cfrac{\Gamma, v{:}(A \wedge B), u{:}A \Rightarrow \pi : C}{\Gamma, v{:}(A \wedge B) \Rightarrow [\mathbf{fst}\ v/u]\ \pi : C}\ \wedge L_1 \qquad \cfrac{\Gamma, v{:}(A \wedge B), u{:}B \Rightarrow \pi : C}{\Gamma, v{:}(A \wedge B) \Rightarrow [\mathbf{snd}\ v/u]\ \pi : C}\ \wedge L_2$

natural deduction:

$\cfrac{\Gamma \vdash \pi : A \wedge B}{\Gamma \vdash \mathbf{fst}\ \pi : A}\ \wedge E_1 \qquad \cfrac{\Gamma \vdash \pi : A \wedge B}{\Gamma \vdash \mathbf{snd}\ \pi : B}\ \wedge E_2$
The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes [3]), which can be used to formally define the notion of a normal form for natural deduction.
The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus.

Cut (substitution): If Γ ⇒ π_{1} : A and Γ, u:A ⇒ π_{2} : C, then Γ ⇒ [π_{1}/u] π_{2} : C.
In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a metatheorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ¬E which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of subpropositions of the conclusion. A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥ true" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion! Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties.
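The bounded-search argument can be made concrete for a small fragment. The following Python sketch (our own, restricted to ∧, ∨, ⊥ and atoms, a fragment where the left rules are invertible and the one non-invertible right rule can be tried exhaustively) performs backward proof search in a cut-free sequent calculus. Every rule application removes a connective, so the search always terminates, and ⋅ ⇒ ⊥ fails immediately because no rule applies:

```python
# Propositions: atoms are strings; 'bot' is ⊥; ('and', A, B); ('or', A, B).

def provable(gamma, c):
    """Backward proof search for the sequent gamma ⇒ c in a cut-free
    calculus for the ∧/∨/⊥ fragment. gamma is a tuple of hypotheses."""
    if c in gamma or 'bot' in gamma:          # init and ⊥L
        return True
    # Invertible left rules first: decompose one compound hypothesis
    for i, h in enumerate(gamma):
        if isinstance(h, tuple):
            rest = gamma[:i] + gamma[i + 1:]
            if h[0] == 'and':                 # ∧L: expose both components
                return provable(rest + (h[1], h[2]), c)
            if h[0] == 'or':                  # ∨L: prove c under each disjunct
                return provable(rest + (h[1],), c) and provable(rest + (h[2],), c)
    # Right rules on the conclusion
    if isinstance(c, tuple):
        if c[0] == 'and':                     # ∧R
            return provable(gamma, c[1]) and provable(gamma, c[2])
        if c[0] == 'or':                      # ∨R: try both disjuncts
            return provable(gamma, c[1]) or provable(gamma, c[2])
    return False                              # no rule applies
```

For example, `provable((), 'bot')` returns False, exactly the global consistency argument above, while commutativity sequents such as A ∧ B ⇒ B ∧ A and A ∨ B ⇒ B ∨ A succeed.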