Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methods of acquiring knowledge. Scientific researchers propose hypotheses as explanations of phenomena and design experimental studies to test these hypotheses. These steps must be repeatable in order to dependably predict any future results. Theories that encompass wider domains of inquiry may bind many hypotheses together in a coherent structure. This in turn may help form new hypotheses or place groups of hypotheses into context.
Among other facets shared by the various fields of inquiry is the conviction that the process must be objective, to reduce biased interpretation of the results. Another basic expectation is to document, archive, and share all data and methodology so they are available for careful scrutiny by other scientists, thereby allowing other researchers the opportunity to verify results by attempting to reproduce them. This practice, called full disclosure, also allows statistical measures of the reliability of these data to be established.
Since Ibn al-Haytham (Alhazen, 965–1039), one of the key figures in developing scientific method, the emphasis has been on seeking truth:
The conjecture that "light travels through transparent bodies in straight lines only" was corroborated by Alhazen only after years of effort. He demonstrated the conjecture by placing a straight stick or a taut thread next to the light beam, to prove that light travels in a straight line.
Scientific methodology has been practiced in some form for at least one thousand years. There are difficulties in a formulaic statement of method, however. As William Whewell (1794–1866) noted in his History of the Inductive Sciences (1837) and Philosophy of the Inductive Sciences (1840), "invention, sagacity, genius" are required at every step in scientific method. It is not enough to base scientific method on experience alone; multiple steps are needed in scientific method, ranging from our experience to our imagination, back and forth.
This model underlies the scientific revolution. One thousand years ago, Alhazen demonstrated the importance of steps 1 and 4. Galileo (1638) also showed the importance of step 4 (also called Experiment) in Two New Sciences. One possible sequence in this model would be 1, 2, 3, 4. If the outcome of 4 holds, and 3 is not yet disproven, you may continue with 3, 4, 1, and so forth; but if the outcome of 4 shows 3 to be false, you will have to go back to 2 and try to invent a new 2, deduce a new 3, look for 4, and so forth.
Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2. (This is what Einstein meant when he said "No amount of experimentation can ever prove me right; a single experiment can prove me wrong.")
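This asymmetry between corroboration and falsification can be sketched in a few lines of Python. This is an illustrative toy, not from any source: the hypothesis is a predicate, the "all swans are white" example is a stand-in for any universal claim, and the observation lists are invented.

```python
def falsified(hypothesis, observations):
    """A universal hypothesis (modeled as a predicate) is falsified by a
    single counterexample; no number of confirming observations proves it
    (cf. Einstein's remark quoted above)."""
    return any(not hypothesis(x) for x in observations)

# Hypothesis: "all swans are white", tested against observed swans.
all_white = lambda swan: swan == "white"
assert not falsified(all_white, ["white", "white", "white"])  # corroborated, not proven
assert falsified(all_white, ["white", "black"])               # one counterexample disproves
```

Note that the first assertion succeeding does not verify the hypothesis: a black swan may still appear in the next batch of observations, which is precisely the point of the quotation above.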
In the twentieth century, Ludwik Fleck (1896–1961) and others found that we need to consider our experiences more carefully, because our experience may be biased, and that we need to be more exact when describing our experiences. These considerations are discussed below.
There are many ways of outlining the basic method shared by all fields of scientific inquiry. The following examples are typical classifications of the most important components of the method on which there is wide agreement in the scientific community and among philosophers of science. There are, however, disagreements about some aspects.
The following set of methodological elements and organization of procedures tends to be more characteristic of natural sciences than social sciences. In the social sciences mathematical and statistical methods of verification and hypotheses testing may be less stringent. Nonetheless the cycle of hypothesis, verification and formulation of new hypotheses will resemble the cycle described below.
Each element of a scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do (see below) but apply mostly to experimental sciences (e.g., physics, chemistry). The elements above are often taught in the educational system.
Scientific method is not a recipe: it requires intelligence, imagination, and creativity. It is also an ongoing cycle, constantly developing more useful, accurate and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically large, the vanishingly small, and the extremely fast are reduced out from Einstein's theories — all phenomena that Newton could not have observed — Newton's equations remain. Einstein's theories are expansions and refinements of Newton's theories, and observations that increase our confidence in them also increase our confidence in Newton's approximations to them.
A linearized, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:
The iterative cycle inherent in this step-by-step methodology goes from point 3 to 6 back to 3 again.
While this schema outlines a typical hypothesis/testing method, a number of philosophers, historians, and sociologists of science (perhaps most notably Paul Feyerabend) claim that such descriptions of scientific method have little relation to the ways science is actually practiced.
The Keystones of Science project, sponsored by the journal Science, has selected a number of scientific articles from that journal and annotated them, illustrating how different parts of each article embody scientific method. Here is an annotated example, an article titled Microbial Genes in the Human Genome: Lateral Transfer or Gene Loss?
Scientific method depends upon increasingly more sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or unknowns.) For example, Benjamin Franklin correctly characterized St. Elmo's fire as electrical in nature, but it took a long series of experiments and theory to establish this. While seeking the pertinent properties of the subjects, this careful thought may also entail some definitions and observations; the observations often demand careful measurements and/or counting.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and a science, such as chemistry or biology. Scientific measurements taken are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and development.
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities that are used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to limitations of the method used. Counts may only represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
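A minimal sketch of the repeated-measurements approach described above, using only the Python standard library (the readings are invented for illustration): the mean of the readings serves as the best estimate, and the standard error of the mean serves as the uncertainty.

```python
import math
from statistics import mean, stdev

def measurement_uncertainty(readings):
    """Estimate a quantity and its uncertainty from repeated measurements:
    the sample mean as the best estimate, and the standard error of the
    mean (sample standard deviation / sqrt(n)) as the uncertainty."""
    best = mean(readings)
    sem = stdev(readings) / math.sqrt(len(readings))
    return best, sem

# Five repeated readings of, say, a length in millimetres.
best, sem = measurement_uncertainty([10.1, 9.9, 10.0, 10.2, 9.8])
print(f"{best:.2f} ± {sem:.2f} mm")  # → 10.00 ± 0.07 mm
```

Taking more readings shrinks the standard error roughly as 1/sqrt(n), which is why repetition, not just instrument precision, drives down reported uncertainty.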
Measurements demand the use of operational definitions of relevant quantities. That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact or "idealized" definition. For example, electrical current, measured in amperes, may be operationally defined in terms of the mass of silver deposited in a certain time on an electrode in an electrochemical device that is described in some detail. The operational definition of a thing often relies on comparisons with standards: the operational definition of "mass" ultimately relies on the use of an artifact, such as a certain kilogram of platinum-iridium kept in a laboratory in France.
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure, which can later be described in terms of conventional physical units when communicating the work.
New theories sometimes arise upon realizing that certain terms had not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us, however, that when characterizing a subject, it can be premature to define something while it remains ill-understood. In Crick's study of consciousness, he found it easier to study awareness in the visual system than to study free will, for example. His cautionary example was the gene: the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene before then.
The characterization element can require extended and extensive study, even centuries. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic and European astronomers, to record the motion of planet Earth. Newton was able to condense these measurements into consequences of his laws of motion. But the perihelion of the planet Mercury's orbit exhibits a precession that is not fully explained by Newton's laws of motion. The observed difference for Mercury's precession between Newtonian theory and relativistic theory (approximately 43 arc-seconds per century), was one of the things that occurred to Einstein as a possible early test of his theory of General Relativity.
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena.
Normally hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have — their own creativity, ideas from other fields, induction, Bayesian inference, and so on — to imagine possible explanations for a phenomenon under study. Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
In general scientists tend to look for theories that are "elegant" or "beautiful". In contrast to the usual English use of these terms, they here refer to a theory in accordance with the known facts, which is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for making these determinations.
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and only talk about probabilities.
It is essential that the outcome be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet useful for the method, and must wait for others who might come afterward, and perhaps rekindle its line of reasoning. For example, a new technology or theory might make the necessary experiments feasible.
Also in their first paper, Watson and Crick predicted that the double helix structure that they discovered would prove important in biology, writing "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".
Einstein's theory of General Relativity makes several specific predictions about the observable structure of space-time, such as a prediction that light bends in a gravitational field and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
Once predictions are made, they can be tested by experiments. If test results contradict predictions, then the hypotheses are called into question and explanations may be sought. Sometimes experiments are conducted incorrectly and are at fault. If the results confirm the predictions, then the hypotheses are considered likely to be correct but might still be wrong and are subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples (or observations) under differing conditions, to see what varies or what remains the same. We vary the conditions for each measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
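The contrast between samples measured under differing conditions can be sketched as follows. This is an illustrative toy, not from any source: the scenario (plant heights with and without fertilizer) and all numbers are invented, and the difference of sample means is only the simplest version of Mill's method of difference.

```python
from statistics import mean

def treatment_effect(control, treatment):
    """Contrast two samples measured under conditions identical except for
    one varied factor (Mill's method of difference): the difference of
    means estimates the effect attributable to that single factor."""
    return mean(treatment) - mean(control)

# Plant heights in cm: conditions identical except for fertilizer.
control   = [10.0, 11.0, 9.5, 10.5]
treatment = [12.0, 13.0, 12.5, 11.5]
effect = treatment_effect(control, treatment)
print(f"estimated effect of fertilizer: {effect:+.2f} cm")  # → +2.00 cm
```

Because everything except the fertilizer was held fixed, the observed difference can be attributed to that factor rather than to some uncontrolled condition, which is the whole point of a control.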
Depending on the predictions, experiments can take different forms: a classical experiment in a laboratory setting, a double-blind study, or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamic hypotheses used in constructing the plane.
Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential, to aid in recording and reporting on the experimental results, and to provide evidence of the effectiveness and integrity of the procedure. Records also assist in reproducing the experimental results. Traces of this tradition can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of Muslim scientists such as Geber (721–815 CE), al-Battani (853–929), and Alhazen (965–1039).
Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
To protect against bad science and fraudulent data, government research granting agencies like NSF and science journals like Nature and Science have a policy that researchers must archive their data and methods so other researchers can access them, test the data and methods, and build on the research that has gone before. Scientific data archiving can be done at a number of national archives in the U.S. or in the World Data Center.
Peirce held that, in practical matters, slow and stumbling ratiocination is not generally to be automatically preferred over instinct and tradition, and held that scientific method is best suited to theoretical inquiry. What recommends the specifically scientific method of inquiry above all others is the fact that it is deliberately designed to arrive, eventually, at the ultimately most secure beliefs, upon which the most successful actions can eventually be based. In 1877, he outlined four methods for the fixation of belief, the settlement of doubt, graded by their success in achieving a sound settlement of belief:
Peirce characterized scientific method in terms of the uses of inference, and paid special attention to the generation of explanations. As a question of presuppositions of reasoning, he defined truth as the correspondence of a sign (in particular, a proposition) to its object and, pragmatically, not as any actual consensus of any finite community (i.e., such that to inquire would be to go ask the experts for the answers), but instead as that ideal final opinion which all reasonable scientific intelligences would reach, sooner or later but still inevitably, if they pushed investigation far enough. In tandem he defined the real as a true sign's object (be that object a possibility or quality, or an actuality or brute fact, or a necessity or norm or law), which is what it is independently of any finite community's opinion and, pragmatically, depends only on the ideal final opinion. That is an opinion as far or near as the truth itself to you or me or any finite community of minds. Thus his theory of inquiry boils down to "do the science." He characterized the scientific method as follows:
1. Abduction (or retroduction). Generation of explanatory hypothesis. From abduction, Peirce distinguishes induction as inferring, on the basis of tests, the proportion of truth in the hypothesis. Every inquiry, whether into ideas, brute facts, or norms and laws, arises as a result of surprising observations in the given realm or realms, and the pondering of the phenomenon in all its aspects in the attempt to resolve the wonder. All explanatory content of theories is reached by way of abduction, the most insecure among modes of inference. Induction as a process is far too slow for that job, so economy of research demands abduction, whose modicum of success depends on one's being somehow attuned to nature, by dispositions learned and, some of them, likely inborn. Abduction has general justification inductively in that it works often enough and that nothing else works, at least not quickly enough when science is already properly rather slow, the work of indefinitely many generations. Peirce calls his pragmatism "the logic of abduction". His Pragmatic Maxim is: "Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of the object". His pragmatism is a method of sorting out conceptual confusions by equating the meaning of any concept with the conceivable practical consequences of whatever it is which the concept portrays. It is a method of experimentational mental reflection arriving at conceptions in terms of conceivable confirmatory and disconfirmatory circumstances — a method hospitable to the generation of explanatory hypotheses, and conducive to the employment and improvement of verification to test the truth of putative knowledge.
Given abduction's dependence on mental processes not necessarily conscious and deliberate but, in any case, attuned to nature, and given abduction's being driven by the need to economize the inquiry process, its explanatory hypotheses should be optimally simple in the sense of "natural" (for which Peirce cites Galileo and which Peirce distinguishes from "logically simple"). Given abduction's insecurity, it should have consequences with conceivable practical bearing leading at least to mental tests, and, in science, lending themselves to scientific testing.
2. Deduction. Analysis of hypothesis and deduction of its consequences in order to test the hypothesis. Two stages:
3. Induction. The long-run validity of the rule of induction is deducible from the principle (presuppositional to reasoning in general) that the real is only the object of the final opinion to which adequate investigation would lead. In other words, if there were something to which an inductive process involving ongoing tests or observations would never lead, then that thing would not be real. Three stages:
Many subspecialties of applied logic and computer science, such as artificial intelligence, machine learning, computational learning theory, inferential statistics, and knowledge representation, are concerned with setting out computational, logical, and statistical frameworks for the various types of inference involved in scientific inquiry. In particular, they contribute to hypothesis formation, logical deduction, and empirical testing. Some of these applications draw on measures of complexity from algorithmic information theory to guide the making of predictions from prior distributions of experience; for example, see the complexity measure called the speed prior, from which a computable strategy for optimal inductive reasoning can be derived.
We find ourselves in a world that is not directly understandable. We find that we sometimes disagree with others as to the facts of the things we see in the world around us, and we find that there are things in the world that are at odds with our present understanding. The scientific method attempts to provide a way in which we can reach agreement and understanding. A "perfect" scientific method might work in such a way that rational application of the method would always result in agreement and understanding; a perfect method would arguably be algorithmic, and so not leave any room for rational agents to disagree. As with all philosophical topics, the search has been neither straightforward nor simple. Logical Positivist, empiricist, falsificationist, and other theories have claimed to give a definitive account of the logic of science, but each has in turn been criticized.
Thomas Samuel Kuhn examined the history of science in his The Structure of Scientific Revolutions, and found that the actual method used by scientists differed dramatically from the then-espoused method. His observations of science practice are essentially sociological and do not speak to how science is or can be practiced in other times and other cultures.
Imre Lakatos and Thomas Kuhn have done extensive work on the "theory laden" character of observation. Kuhn (1961) said the scientist generally has a theory in mind before designing and undertaking experiments so as to make empirical observations, and that the "route from theory to measurement can almost never be traveled backward". This implies that the way in which theory is tested is dictated by the nature of the theory itself, which led Kuhn (1961, p. 166) to argue that "once it has been adopted by a profession ... no theory is recognized to be testable by any quantitative tests that it has not already passed".
Paul Feyerabend similarly examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argues that scientific progress is not the result of applying any particular method. In essence, he says that "anything goes", by which he meant that for any specific methodology or norm of science, successful science has been done in violation of it. Criticisms such as his led to the strong programme, a radical approach to the sociology of science.
In his 1958 book, Personal Knowledge, chemist and philosopher Michael Polanyi (1891–1976) criticized the common view that the scientific method is purely objective and generates objective knowledge. Polanyi cast this view as a misunderstanding of the scientific method and of the nature of scientific inquiry, generally. He argued that scientists do and must follow personal passions in appraising facts and in determining which scientific questions to investigate. He concluded that a structure of liberty is essential for the advancement of science: the freedom to pursue science for its own sake is a prerequisite for the production of knowledge through peer review and the scientific method.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between the postmodernist and realist camps. Whereas postmodernists assert that scientific knowledge is simply another discourse (note that this term has special meaning in this context) and not representative of any form of fundamental truth, realists in the scientific community maintain that scientific knowledge does reveal real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate method of deriving truth.
The primary constraints on contemporary western science are:
It has not always been like this: in the old days of the "gentleman scientist", funding (and, to a lesser extent, publication) were far weaker constraints.
Both of these constraints indirectly bring in a scientific method: work that too obviously violates the constraints will be difficult to publish and difficult to get funded. Journals do not require submitted papers to conform to anything more specific than "good scientific practice", and this is mostly enforced by peer review. Originality, importance, and interest are valued more highly; see, for example, the author guidelines for Nature.
Criticisms (see Critical theory) of these constraints hold that they are so nebulous in definition (e.g., "good scientific practice"), and so open to ideological or even political manipulation apart from a rigorous practice of a scientific method, that they often serve to censor rather than promote scientific discovery. Apparent censorship through refusal to publish ideas unpopular with mainstream scientists (unpopular for ideological reasons and/or because they seem to contradict long-held scientific theories) has soured the popular perception of scientists as neutral seekers of truth, and has often denigrated the popular perception of science as a whole.
The development of the scientific method is inseparable from the history of science itself. Ancient Egyptian documents, such as early papyri, describe methods of medical diagnosis. In ancient Greek culture, the method of empiricism was described. The first experimental scientific method was developed by Muslim scientists, who introduced the use of experimentation and quantification to distinguish between competing scientific theories set within a generally empirical orientation, which emerged with Alhazen's optical experiments in his Book of Optics (1021). The modern scientific method crystallized no later than in the 17th and 18th centuries. In his work Novum Organum (1620) — a reference to Aristotle's Organon — Francis Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Then, in 1637, René Descartes established the framework for a scientific method's guiding principles in his treatise, Discourse on Method. The writings of Alhazen, Bacon and Descartes are considered critical in the historical development of the modern scientific method.
In the late 19th century, Charles Sanders Peirce proposed a schema that would turn out to have considerable influence in the development of current scientific method generally. Peirce accelerated the progress on several fronts. Firstly, speaking in a broader context in "How to Make Our Ideas Clear" (1878), Peirce outlined an objectively verifiable method to test the truth of putative knowledge in a way that goes beyond mere foundational alternatives, focusing upon both deduction and induction. He thus placed induction and deduction in a complementary rather than competitive context (the latter of which had been the primary trend at least since David Hume, who wrote in the mid-to-late 18th century). Secondly, and of more direct importance to modern method, Peirce put forth the basic schema for hypothesis/testing that continues to prevail today. Extracting the theory of inquiry from its raw materials in classical logic, he refined it in parallel with the early development of symbolic logic to address the then-current problems in scientific reasoning. Peirce examined and articulated the three fundamental modes of reasoning that, as discussed above in this article, play a role in inquiry today, the processes that are currently known as abductive, deductive, and inductive inference. Thirdly, he played a major role in the progress of symbolic logic itself — indeed this was his primary specialty.
Karl Popper denied the existence of evidence and of scientific method. Popper holds that there is only one universal method, the negative method of trial and error. It covers not only all products of the human mind, including science, mathematics, philosophy, art and so on, but also the evolution of life.
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines can clearly distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proven; at such a stage, that statement would be called a conjecture. But when a statement has attained mathematical proof, that statement gains a kind of immortality which is highly prized by mathematicians, and to which some mathematicians devote their lives.
Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, while timelessness was a hallmark of mathematical topics. But the Poincaré conjecture was proven using time as a mathematical concept in which objects can flow (see Ricci flow).
Nevertheless, the connection between mathematics and reality (and so science, to the extent it describes reality) remains obscure. Eugene Wigner's paper, The Unreasonable Effectiveness of Mathematics in the Natural Sciences, is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.
George Pólya's work on problem solving, the construction of mathematical proofs, and heuristics shows that mathematical method and scientific method differ in detail, while nevertheless resembling each other in their use of iterative or recursive steps.
   Mathematical method    Scientific method
1. Understanding          Characterization from experience and observation
2. Analysis               Hypothesis: a proposed explanation
3. Synthesis              Deduction: prediction from the hypothesis
4. Review/Extend          Test and experiment
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.