Definitions

Inference

induction, problem of

Problem of justifying the inductive inference from the observed to the unobserved. It was given its classic formulation by David Hume, who noted that such inferences typically rely on the assumption that the future will resemble the past, or on the assumption that events of a certain type are necessarily connected, via a relation of causation, to events of another type.

(1) If we were asked why we believe that the sun will rise tomorrow, we would say that in the past the Earth turned on its axis every 24 hours (more or less), and that there is a uniformity in nature that guarantees that such events always happen in the same way. But how do we know that nature is uniform in this sense? We might answer that, in the past, nature has always exhibited this kind of uniformity, and so it will continue to be uniform in the future. But this inference is justified only if we assume that the future must resemble the past. How do we justify this assumption? We might say that in the past, the future turned out to resemble the past, and so in the future, the future will again turn out to resemble the past. The inference is obviously circular: it succeeds only by tacitly assuming what it sets out to prove, namely that the future will resemble the past.

(2) If we are asked why we believe we will feel heat when we approach a fire, we would say that fire causes heat, i.e., there is a “necessary connection” between fire and heat, such that whenever one occurs, the other must follow. But, Hume asks, what is this “necessary connection”? Do we observe it when we see the fire or feel the heat? If not, what evidence do we have that it exists? All we have is our observation, in the past, of a “constant conjunction” of instances of fire being followed by instances of heat. This observation does not show that, in the future, instances of fire will continue to be followed by instances of heat; to say that it does is to assume that the future must resemble the past. But if our observation is consistent with the possibility that fire may not be followed by heat in the future, then it cannot show that there is a necessary connection between the two that makes heat follow fire whenever fire occurs.

Thus we are not justified in believing that (1) the sun will rise tomorrow or that (2) we will feel heat when we approach a fire. It is important to note that Hume did not deny that he or anyone else formed beliefs about the future on the basis of induction; he denied only that such beliefs can be rationally justified. Philosophers have responded to the problem of induction in a variety of ways, though none has gained wide acceptance.

Method of raising the temperature of an electrically conductive material by subjecting it to an alternating electromagnetic field. Energy in the electric currents induced in the object is dissipated as heat. Induction heating is used in metalworking to heat metals for soldering, tempering, and annealing, and in induction furnaces for melting and processing metals. The principle of the induction-heating process resembles that of the transformer. A water-cooled coil (inductor), acting as the primary winding of a transformer, surrounds the material to be heated (the workpiece), which acts as the secondary winding. Alternating current flowing in the primary coil induces eddy currents in the workpiece, causing it to become heated. The depth to which the eddy currents penetrate, and therefore the distribution of heat within the object, depend on the frequency of the primary alternating current and the magnetic permeability, as well as the resistivity, of the material.
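
The depth of penetration mentioned above can be estimated with the standard skin-depth formula δ = √(ρ / (π f μ)). A minimal sketch in Python, assuming illustrative material values rather than anything from the text above:

import math

def skin_depth(resistivity, frequency, relative_permeability=1.0):
    # Depth (in metres) at which induced eddy currents fall to 1/e of their surface value.
    mu = 4 * math.pi * 1e-7 * relative_permeability   # absolute magnetic permeability
    return math.sqrt(resistivity / (math.pi * frequency * mu))

# A copper workpiece (resistivity ~1.7e-8 ohm*m) at 10 kHz: about 0.66 mm,
# so raising the frequency concentrates the heating nearer the surface.
print(skin_depth(1.7e-8, 10_000))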

In logic, a type of nonvalid inference or argument in which the premises provide some reason for believing that the conclusion is true. Typical forms of inductive argument include reasoning from a part to a whole, from the particular to the general, and from a sample to an entire population. Induction is traditionally contrasted with deduction. Many of the problems of inductive logic, including what is known as the problem of induction, have been treated in studies of the methodology of the natural sciences. See also John Stuart Mill; philosophy of science; scientific method.

Modification in the distribution of electric charge on one material under the influence of an electric charge on a nearby object. It occurs whenever any object is placed in an electric field. When a negatively charged object is brought near a neutral object, it induces a positive charge on the near side of the neutral object and a negative charge on the far side. If the far (negative) side of the neutral object is momentarily grounded, the negative charge may escape, so that the object becomes positively charged by induction.

In logic, a type of inference or argument that purports to be valid, where a valid argument is one whose conclusion must be true if its premises are true (see validity). Deduction is thus distinguished from induction, where there is no such presumption. Valid deductive arguments may have false premises, as demonstrated by the example: “All men are mortal; Cleopatra is a man; therefore, Cleopatra is mortal.” Invalid deductive arguments sometimes embody formal fallacies (i.e., errors of reasoning based on the structure of the propositions in the argument); an example is “affirming the consequent”: “If A then B; B; therefore, A” (see fallacy; formal and informal).

Inference is the act or process of deriving a conclusion based solely on what one already knows.

Inference is studied within several different fields.

The accuracy of inductive and deductive inferences

The process by which a conclusion is inferred from multiple observations is called inductive reasoning. The conclusion may be correct or incorrect, or partially correct, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.
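
A toy sketch of this process in Python (the swan observations are invented for illustration), in which a generalization drawn from a sample is tested, and overturned, by further observations:

# Colours of swans observed so far (hypothetical data).
observations = ["white", "white", "white", "white"]

def all_swans_are_white(obs):
    # Inductive generalization from the sample to the whole population.
    return all(colour == "white" for colour in obs)

print(all_swans_are_white(observations))   # True: conclusion provisionally accepted

observations.append("black")               # an additional observation arrives
print(all_swans_are_white(observations))   # False: the conclusion is overturned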

Inductive inference is the method of science. A theory is proposed based on multiple observations, usually observations carried out with great care, using measurement. The theory is then tested many times, by independent investigators, using their own multiple observations. If the theory proves correct to within the accuracy of those observations, then it is provisionally accepted. Any scientific theory is subject to additional testing, and may be modified or overthrown based on additional evidence. For example, the Germ Theory of Disease required modification when viruses and also deficiency diseases were discovered.

Scientific theories arrived at by inductive inference have proved stable enough over long periods of time to revolutionize the way human beings live. A scientific theory can be overthrown only by carefully recorded, repeated observations, carried out independently by a large number of investigators; it cannot be overthrown by revelation, authority, ignorance, or doubt.

The process by which a conclusion is logically inferred from certain premises is called deductive reasoning. Deductive inference is the method of mathematics. Certain definitions and axioms are taken as a starting point, and from these certain theorems are deduced using pure reasoning. The idea for a theorem may come from many sources, such as analogy, pattern recognition, or experiment; however, a conjecture is not granted the status of theorem until it has a deductive proof. This method of inference is even more accurate than the scientific method. Mistakes are usually quickly detected by other mathematicians and corrected. The proofs of Euclid, for example, have contained mistakes that were caught and corrected, but the theorems of Euclid, all of them without exception, have stood the test of time for more than two thousand years.

From a pragmatic viewpoint, the inferences arrived at by the methods of science and mathematics have proved much more successful than the inferences arrived at by any other method. This has given rise to the popular saying, when a conclusion is challenged, "Do the math."

Valid inferences

Inferences are either valid or invalid, but not both. Philosophical logic has attempted to define the rules of proper inference, i.e. the formal rules that, when correctly applied to true premises, lead to true conclusions. Aristotle gave one of the most famous statements of those rules in his Organon. Modern mathematical logic, beginning in the 19th century, has built numerous formal systems that extend and generalize Aristotelian logic.

Examples of deductive inference

Greek philosophers defined a number of syllogisms, correct three-part inferences, that can be used as building blocks for more complex reasoning. We'll begin with the most famous of them all:

All men are mortal
Socrates is a man
------------------
Therefore Socrates is mortal.

The reader can check that the premises and conclusion are true. This form of argument is a categorical syllogism; the Latin name modus ponens, sometimes attached to it, properly refers to the propositional rule "if P then Q; P; therefore Q".

The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if the parts are true. But a valid form with true premises will always have a true conclusion.

For example, consider the form of the syllogism above:

All A are B
C is A
----------
Therefore C is B

The form remains valid even if all three parts are false:

All apples are blue.
A banana is an apple.
----
Therefore bananas are blue.

For a valid form to guarantee a true conclusion, the premises must also be true.

Now we turn to an invalid form.

All A are B.
C is a B.
----
Therefore C is an A.

To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.

All apples are fruit.            (true)
Bananas are fruit.               (true)
----
Therefore bananas are apples.    (false)

A valid argument with false premises may lead to a false conclusion:

All fat people are Greek
John Lennon was fat
-------------------
Therefore John Lennon was Greek

Here a valid argument is used to derive a false conclusion from false premises. The inference is valid because it follows the form of a correct inference.

A valid argument can also be used to derive a true conclusion from false premises:

All fat people are musicians
John Lennon was fat
-------------------
Therefore John Lennon was a musician

In this case we have two false premises that imply a true conclusion.
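
Because validity is a matter of form alone, it can be checked mechanically. The following is a brute-force sketch in Python (not any standard library): it enumerates every small interpretation of A, B and an individual c, and reports a counterexample, i.e. an interpretation that makes the premises true and the conclusion false, if one exists:

from itertools import combinations

domain = {0, 1, 2}

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def counterexample(second_premise, conclusion):
    # The first premise is always "All A are B"; second_premise and conclusion
    # are "A" or "B", meaning "c is an A" / "c is a B".
    for A in subsets(domain):
        for B in subsets(domain):
            for c in domain:
                sets = {"A": A, "B": B}
                premises_true = A <= B and c in sets[second_premise]
                if premises_true and c not in sets[conclusion]:
                    return A, B, c
    return None

print(counterexample("A", "B"))   # None: "All A are B; c is an A; so c is a B" is valid
print(counterexample("B", "A"))   # e.g. (set(), {0}, 0): the converse form is invalid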

Incorrect inference

An incorrect inference is known as a fallacy. Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect conclusions.

Automatic logical inference

AI systems first provided automated logical inference, and this was once an extremely popular research topic, leading to industrial applications in the form of expert systems and later business rule engines.

An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by such a system to extend the KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.
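
A minimal sketch of that idea in Python, assuming ground (already instantiated) rules and invented predicate names: the system repeatedly applies its rules until no new proposition can be added, a strategy known as forward chaining:

# Propositions the system starts with, and rules of the form (premises, conclusion).
facts = {"man(socrates)"}
rules = [({"man(socrates)"}, "mortal(socrates)")]

def forward_chain(facts, rules):
    # Keep adding conclusions whose premises are already in the knowledge base.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(facts, rules))   # {'man(socrates)', 'mortal(socrates)'}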

An example: inference using Prolog

Prolog (for "Programming in Logic") is a programming language based on a subset of predicate calculus. Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called backward chaining.

Let us return to our Socrates syllogism. We enter into our Knowledge Base the following piece of code:

mortal(X) :- man(X).     % every man is mortal
man(socrates).           % Socrates is a man
(Here :- can be read as "if": in general, "if P then Q" is written in Prolog as Q :- P, i.e. "Q if P".)
This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:

?- mortal(socrates).
(where ?- signifies a query: can mortal(socrates) be deduced from the KB using the rules?) gives the answer "Yes".

On the other hand, asking the Prolog system the following:

?- mortal(plato).

gives the answer "No".

This is because Prolog does not know anything about Plato, and hence assumes that any property of Plato is false (the so-called closed-world assumption). Finally, ?- mortal(X). (is anything mortal?) would result in "Yes" (and in some implementations, "Yes": X = socrates).
Prolog can be used for vastly more complicated inference tasks. See the corresponding article for further examples.

Automatic inference and the semantic web

Recently, automatic reasoners have found a new field of application in the Semantic Web. Because it is based upon a fragment of first-order logic, knowledge expressed using a variant of OWL can be logically processed, i.e., inferences can be drawn from it.

Inference and uncertainty

Traditional logic is concerned only with certainty: one progresses from premises to a conclusion, where all the premises and the conclusion are declarative sentences that are either true or false. There are several motivations for extending logic to deal with uncertain "propositions" and weaker modes of reasoning.

  • Philosophical motivations
    • A large part of our everyday reasoning does not follow the strict rules of logic, but is nevertheless effective in many cases.
    • Science itself is not deductive, but largely inductive, and its process cannot be captured by standard logic (see problem of induction).
  • Technical motivations
    • Statisticians and scientists wish to be able to infer parameters or test hypotheses on statistical data in a rigorous, quantified way.
    • Artificial intelligence systems need to reason efficiently about uncertain quantities.

The reason most examples of applying deductive logic, such as the one above, seem artificial is that they are rarely encountered outside fields such as mathematics. Most of our everyday reasoning is of a less "pure" nature.

To take an example: suppose you live in a flat. Late at night, you are awakened by creaking sounds in the ceiling. You infer from these sounds that your neighbour upstairs is having another bout of insomnia and is pacing in his room, sleepless.

Although that reasoning seems sound, it does not fit in the logical framework described above. First, the reasoning is based on uncertain facts: what you heard were creaks, not necessarily footsteps. But even if those facts were certain, the inference is of an inductive nature: perhaps you have often heard your neighbour at night, and the best explanation you have found is that he or she is an insomniac. Hence tonight's footsteps.

It is easy to see that this line of reasoning does not necessarily lead to true conclusions: perhaps your neighbour had a very early plane to catch, which would explain the footsteps just as well. Uncertain reasoning can only find the best explanation among many alternatives.

Bayesian statistics and probability logic

Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find this best explanation. The Bayesian view has a number of desirable features—one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following E. T. Jaynes).

Bayesians identify probabilities with degrees of belief, with certainly true propositions having probability 1 and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow very likely.

Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory). A central rule of Bayesian inference is Bayes' theorem, which gave its name to the field.
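
As a small worked example, returning to the creaking-ceiling story above (all of the numbers here are invented purely for illustration), Bayes' theorem P(H | E) = P(E | H) P(H) / P(E) can be applied directly in Python:

# Hypothesis H: the neighbour is pacing; evidence E: creaking sounds are heard.
p_pacing = 0.3               # prior probability that the neighbour is pacing tonight
p_creaks_if_pacing = 0.8     # probability of creaks if he is pacing
p_creaks_if_not = 0.1        # probability of creaks otherwise

# Total probability of the evidence.
p_creaks = p_creaks_if_pacing * p_pacing + p_creaks_if_not * (1 - p_pacing)

# Bayes' theorem: posterior probability of pacing, given that creaks were heard.
p_pacing_given_creaks = p_creaks_if_pacing * p_pacing / p_creaks
print(round(p_pacing_given_creaks, 3))   # 0.774: hearing creaks raises 0.3 to about 0.77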

See Bayesian inference for examples.

Nonmonotonic logic

Source: André Fuhrmann, "Nonmonotonic Logic"

A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is nonmonotonic. Deductive inference, at least according to the canons of classical logic, is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.

By contrast, everyday reasoning is mostly nonmonotonic because it involves risk: we jump to conclusions from deductively insufficient premises. We know when it is worthwhile or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible: new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction, inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
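
A toy sketch of the contrast in Python, using the classic "birds normally fly" default (an example not taken from the article above): the conclusion is drawn from the current premises but withdrawn when a new premise is added, something a monotonic, deductive relation of inference never does:

def concludes_flies(premises):
    # Defeasible default rule: a bird flies unless the premises record an exception.
    return "bird(tweety)" in premises and "penguin(tweety)" not in premises

premises = {"bird(tweety)"}
print(concludes_flies(premises))      # True: conclusion reached by default

premises.add("penguin(tweety)")       # new information arrives
print(concludes_flies(premises))      # False: the earlier conclusion is retracted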

Three types of logical inference

There are three types of inference: deduction, induction, and abduction.

An example

Hooke's law is the rule that gives the elongation of a beam (the effect) when a force (the cause) acts on it; a small numerical sketch follows the list below.

  • If the force and Hooke's law are known, the elongation of the beam can be deduced.
  • If the elongation and Hooke's law are known, the force acting on the beam can be abduced.
  • If the elongation and the force are known, Hooke's law can be induced.
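
A rough numerical sketch of these three directions of inference in Python (the stiffness and measurements below are invented for illustration):

k = 2.0                                    # assumed law for this beam: F = k * x (N per mm)

# Deduction: law and cause (force) known -> effect (elongation).
force = 10.0
elongation = force / k                     # 5.0 mm

# Abduction: law and effect (elongation) known -> cause (force).
observed_elongation = 5.0
inferred_force = k * observed_elongation   # 10.0 N

# Induction: cause/effect pairs known -> law (estimate k from the data).
measurements = [(2.0, 1.0), (4.0, 2.0), (10.0, 5.0)]    # (force, elongation) pairs
estimated_k = sum(f / x for f, x in measurements) / len(measurements)   # 2.0

print(elongation, inferred_force, estimated_k)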

References

  • Ian Hacking. An Introduction to Probability and Inductive Logic. Cambridge University Press, 2000.
  • Edwin Thompson Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003. ISBN 0-521-59271-2.
  • David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
  • Henk Tijms. Understanding Probability. Cambridge University Press, 2004.
  • André Fuhrmann. "Nonmonotonic Logic".
