A manometer may also refer to a pressure measuring instrument, usually limited to measuring pressures near to atmospheric. The term manometer is often used to refer specifically to liquid-column hydrostatic instruments.
A vacuum gauge is used to measure pressure in a vacuum, which is further divided into two subcategories: high and low vacuum (and sometimes ultra-high vacuum). The applicable pressure ranges of many of the techniques used to measure vacuums overlap. Hence, by combining several different types of gauge, it is possible to measure system pressure continuously from 10 mbar down to 10⁻¹¹ mbar.
The zero reference in use is usually implied by context, and these words are only added when clarification is needed. Tire pressure and blood pressure are gauge pressures by convention, while atmospheric pressures, deep vacuum pressures, and altimeter pressures must be absolute. Differential pressures are commonly used in industrial process systems. Differential pressure gauges have two inlet ports, each connected to one of the volumes whose pressure is to be monitored. In effect, such a gauge performs the mathematical operation of subtraction through mechanical means, obviating the need for an operator or control system to watch two separate gauges and determine the difference in readings. Moderate vacuum pressures are often ambiguous, as they may represent absolute pressure or gauge pressure without a negative sign. Thus a vacuum of 26 inHg gauge is equivalent to an absolute pressure of 30 inHg (typical atmospheric pressure) − 26 inHg = 4 inHg.
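The subtraction that a differential gauge performs mechanically, and the gauge/absolute conversion in the inHg example above, can be sketched in a few lines of Python (the 30 inHg atmospheric value follows the example in the text; the function names are illustrative):

```python
# Hedged sketch: converting between gauge, absolute, and differential
# pressure readings. The 30 inHg "typical atmospheric pressure" and the
# 26 inHg vacuum reading come from the example in the text.

def absolute_from_vacuum(vacuum_inhg, atmospheric_inhg=30.0):
    """Absolute pressure implied by a vacuum reading in inHg gauge."""
    return atmospheric_inhg - vacuum_inhg

def differential(p1, p2):
    """A differential gauge performs this subtraction mechanically."""
    return p1 - p2

print(absolute_from_vacuum(26.0))  # 4.0 inHg absolute
```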
Atmospheric pressure is typically about 100 kPa at sea level, but varies with altitude and weather. If the absolute pressure of a fluid stays constant, the gauge pressure of the same fluid will vary as atmospheric pressure changes. For example, when a car drives up a mountain, the gauge tire pressure goes up because the surrounding atmospheric pressure goes down. Some standard values of atmospheric pressure such as 101.325 kPa or 100 kPa have been defined, and some instruments use one of these standard values as a constant zero reference instead of the actual variable ambient air pressure. This impairs the accuracy of these instruments, especially when used at high altitudes.
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N·m⁻² or kg·m⁻¹·s⁻²). This special name for the unit was added in 1971; before that, pressure in SI was expressed in units such as N/m². When indicated, the zero reference is stated in parentheses following the unit, for example 101 kPa (abs). The pound per square inch (psi) is still in widespread use in the US and Canada, notably for cars. A letter is often appended to the psi unit to indicate the measurement's zero reference (psia for absolute, psig for gauge, psid for differential), although this practice is discouraged by NIST.
Because pressure was once commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g. inches of water). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's density allows for a shorter column (and so a smaller manometer) to measure a given pressure.
Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When 'millimetres of mercury' or 'inches of mercury' are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. The water-based units usually assume one of the older definitions of the kilogram as the mass of a litre of water.
Although no longer favoured by measurement experts, these manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury in most of the world, and lung pressures in centimetres of water are still common. Natural gas pipeline pressures are measured in inches of water, expressed as ″WC ("water column"). Scuba divers often use a manometric rule of thumb: the pressure exerted by ten metres depth of water is approximately equal to one atmosphere. In vacuum systems, the units torr, micrometre of mercury (micron), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicate an absolute pressure, while inHg usually indicates a gauge pressure.
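The conventional definitions of these manometric units, and the divers' ten-metre rule of thumb, can be checked numerically. The sketch below uses the standard defined conversion factors (the factor values and function name are supplied for illustration, not taken from the text):

```python
# Hedged sketch: conventional manometric units expressed in pascals.
# These factors are the modern defined values, not physical columns.

PA_PER_MMHG = 133.322            # conventional millimetre of mercury
PA_PER_INHG = 3386.39            # conventional inch of mercury
PA_PER_INH2O = 249.089           # inch of water (4 degC reference density)
PA_PER_TORR = 101325.0 / 760.0   # torr, defined as 1/760 atmosphere

def to_pascal(value, unit):
    factors = {"mmHg": PA_PER_MMHG, "inHg": PA_PER_INHG,
               "inH2O": PA_PER_INH2O, "torr": PA_PER_TORR}
    return value * factors[unit]

# The divers' rule of thumb: ten metres of water is roughly one atmosphere.
ten_metres_water = 1000.0 * 9.80665 * 10.0  # rho * g * h, in Pa
print(ten_metres_water)  # just under one standard atmosphere (101325 Pa)
```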
Atmospheric pressures are usually stated using kilopascals (kPa) or atmospheres (atm), except in American meteorology, where the hectopascal (hPa) and millibar (mbar) are preferred. In American and Canadian engineering, stress is often measured in kip. Note that stress is not a true pressure, since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn·cm⁻². In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre.
Many other hybrid units are used, such as mmHg/cm² or grams-force/cm² (sometimes written as kg/cm² and g/cm² without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is forbidden in SI; the SI unit of force is the newton (N).
While static gauge pressure is of primary importance in determining net loads on pipe walls, dynamic pressure is used to measure flow rates and airspeed. Dynamic pressure can be measured by taking the differential pressure between instruments parallel and perpendicular to the flow. Pitot-static tubes, for example, perform this measurement on airplanes to determine airspeed. The presence of the measuring instrument inevitably acts to divert flow and create turbulence, so its shape is critical to accuracy and the calibration curves are often non-linear.
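The pitot-static measurement above can be sketched numerically: the differential reading is the dynamic pressure q = ½ρv², which is inverted for airspeed. The incompressible-flow assumption and the sea-level air density are illustrative simplifications, not from the text:

```python
import math

# Hedged sketch: recovering airspeed from a pitot-static differential
# reading. Assumes incompressible flow (valid well below the speed of
# sound) and standard sea-level air density; both are illustrative.

RHO_AIR = 1.225  # kg/m^3, standard sea-level air density

def dynamic_pressure(total_pa, static_pa):
    """Differential between the flow-facing and flow-parallel ports."""
    return total_pa - static_pa

def airspeed(q_pa, rho=RHO_AIR):
    """Invert q = 1/2 * rho * v^2 for the flow speed v."""
    return math.sqrt(2.0 * q_pa / rho)

q = dynamic_pressure(102000.0, 101325.0)  # 675 Pa measured differential
print(round(airspeed(q), 1))  # 33.2 m/s
```

A real instrument would apply a calibration curve here rather than the ideal formula, since, as noted above, the probe itself disturbs the flow.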
Liquid column gauges consist of a vertical column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight is in equilibrium with the pressure differential between the two ends of the tube. A very simple version is a U-shaped tube half-full of liquid, one side of which is connected to the region of interest while the reference pressure (which might be the atmospheric pressure or a vacuum) is applied to the other. The difference in liquid level represents the applied pressure. The pressure exerted by a column of fluid of height h and density ρ is given by the hydrostatic pressure equation, P = hgρ. Therefore the difference between the applied pressure Pa and the reference pressure Po in a U-tube manometer can be found by solving Pa − Po = hgρ. If the fluid being measured is significantly dense, hydrostatic corrections may have to be made for the height between the moving surface of the manometer working fluid and the location where the pressure measurement is desired.
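The U-tube relation Pa − Po = hgρ can be evaluated directly; the sketch below uses illustrative column heights and standard fluid densities:

```python
# Hedged sketch of the U-tube manometer relation Pa - Po = h * g * rho.

G = 9.80665  # m/s^2, standard gravity

def pressure_difference(height_m, density_kg_m3):
    """Pressure difference indicated by a liquid-column height."""
    return density_kg_m3 * G * height_m

def column_height(delta_p_pa, density_kg_m3):
    """Height at which the column settles for a given pressure difference."""
    return delta_p_pa / (density_kg_m3 * G)

# 100 mm of mercury (~13534 kg/m^3) versus 100 mm of water:
print(pressure_difference(0.100, 13534.0))  # about 13272 Pa
print(pressure_difference(0.100, 1000.0))   # about 981 Pa
```

The same height of mercury indicates roughly 13.5 times the pressure that water does, which is why mercury allows a much shorter column for a given pressure.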
Any fluid can be used, but mercury is preferred for its high density (13.534 g/cm³) and low vapour pressure. For low pressure differences well above the vapour pressure of water, water is a commonly used liquid (and "inches of water" is a commonly used pressure unit). Liquid column pressure gauges are independent of the type of gas being measured and have a highly linear calibration, but they have poor dynamic response. When measuring vacuum, the working liquid may evaporate and contaminate the vacuum if its vapour pressure is too high. When measuring liquid pressure, a loop filled with gas or a light fluid must isolate the liquids to prevent them from mixing. Simple hydrostatic gauges can measure pressures ranging from a few torr (a few hundred pascals) to a few atmospheres (approximately 1,000,000 Pa).
A single-limb liquid-column manometer has a larger reservoir instead of one side of the U-tube and has a scale beside the narrower column. The column may be inclined to further amplify the liquid movement. Several types of manometer are in use, depending on their application and construction.
A McLeod gauge isolates a sample of gas and compresses it in a modified mercury manometer until the pressure is a few mmHg. The gas must be well-behaved during its compression (it must not condense, for example). The technique is slow and unsuited to continual monitoring, but is capable of good accuracy.
An important variation is the McLeod gauge, which isolates a known volume of vacuum and compresses it to multiply the height variation of the liquid column. The McLeod gauge can measure vacuums as low as 10⁻⁶ Torr (0.1 mPa), which is the lowest direct measurement of pressure possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly, by measurement of other pressure-controlled properties. These indirect measurements must be calibrated to SI units via a direct measurement, most commonly a McLeod gauge.
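Because the trapped sample is compressed isothermally, Boyle's law relates the unknown pressure to the compressed reading. A minimal sketch, with illustrative volumes and readings (not taken from the text):

```python
# Hedged sketch of the McLeod principle: a known volume of gas at the
# unknown pressure is compressed isothermally, so Boyle's law
# (P1 * V1 = P2 * V2) gives the unknown pressure from the compression
# ratio. All numbers below are illustrative.

def mcleod_pressure(v_initial_cm3, v_final_cm3, p_final_mmhg):
    """Unknown pressure inferred from the compression ratio."""
    return p_final_mmhg * v_final_cm3 / v_initial_cm3

# Compress 100 cm^3 of gas into 0.01 cm^3 and read 5 mmHg on the column:
print(mcleod_pressure(100.0, 0.01, 5.0))  # 5e-4 mmHg
```

This also shows why the gas must be well-behaved: Boyle's law fails if the sample condenses during compression.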
A Bourdon gauge uses a coiled tube which, as it expands due to increasing pressure, causes a rotation of an arm connected to the tube.
The pressure sensing element is a closed coiled tube connected to the chamber or pipe in which pressure is to be sensed. As the gauge pressure increases, the tube will tend to uncoil, while a reduced gauge pressure will cause the tube to coil more tightly. This motion is transferred through a linkage to a gear train connected to an indicating needle. The needle is presented in front of a card face inscribed with the pressure indications associated with particular needle deflections. In a barometer, the Bourdon tube is sealed at both ends and the absolute pressure of the ambient atmosphere is sensed. Differential Bourdon gauges use two Bourdon tubes and a mechanical linkage that compares the readings.
In the following pictures, the transparent cover face has been removed and the mechanism removed from the case. This particular gauge is a combination vacuum and pressure gauge used for automotive diagnosis.
A second type of aneroid gauge uses the deflection of a flexible membrane that separates regions of different pressure. The amount of deflection is repeatable for known pressures so the pressure can be determined by using calibration. The deformation of a thin diaphragm is dependent on the difference in pressure between its two faces. The reference face can be open to atmosphere to measure gauge pressure, open to a second port to measure differential pressure, or can be sealed against a vacuum or other fixed reference pressure to measure absolute pressure. The deformation can be measured using mechanical, optical or capacitive techniques. Ceramic and metallic diaphragms are used.
This is also called a capacitance manometer, in which the diaphragm makes up a part of a capacitor. A change in pressure leads to the flexure of the diaphragm, which results in a change in capacitance. These gauges are effective from 10³ Torr to 10⁻⁴ Torr.
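The diaphragm-to-capacitance relationship can be sketched with a parallel-plate model. The linear deflection model, the compliance value, and the dimensions below are all illustrative assumptions, not properties of any real gauge:

```python
# Hedged sketch: a capacitance manometer modelled as a parallel-plate
# capacitor whose gap shrinks as pressure deflects the diaphragm.
# Dimensions and the linear-deflection model are illustrative only.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def capacitance(area_m2, gap_m):
    """Parallel-plate approximation C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

def gap_under_pressure(gap0_m, pressure_pa, compliance_m_per_pa):
    """Assume small deflections proportional to the applied pressure."""
    return gap0_m - compliance_m_per_pa * pressure_pa

c_rest = capacitance(1e-4, 100e-6)  # 1 cm^2 plate, 100 um rest gap
c_load = capacitance(1e-4, gap_under_pressure(100e-6, 50.0, 1e-9))
print(c_load > c_rest)  # True: smaller gap means larger capacitance
```

A real instrument measures the capacitance change electronically and maps it back to pressure through a calibration, as the text describes.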
Thermionic emission generates electrons, which collide with gas atoms and generate positive ions. The ions are attracted to a suitably biased electrode known as the collector. The current in the collector is proportional to the rate of ionization, which is a function of the pressure in the system. Hence, measuring the collector current gives the gas pressure. There are several sub-types of ionization gauge.
Most ion gauges come in two types: hot cathode and cold cathode. (A third type, the spinning rotor gauge, is more sensitive and more expensive, but is not discussed here.) In the hot cathode version, an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10⁻³ Torr to 10⁻¹⁰ Torr. The principle behind the cold cathode version is the same, except that electrons are produced in a high-voltage electrical discharge. Cold cathode gauges are accurate from 10⁻² Torr to 10⁻⁹ Torr. Ionization gauge calibration is very sensitive to construction geometry, the chemical composition of the gases being measured, corrosion, and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement.
A hot cathode ionization gauge is composed mainly of three electrodes acting together as a triode, wherein the cathode is the filament. The three electrodes are a collector or plate, a filament, and a grid. The collector current is measured in picoamps by an electrometer. The filament voltage to ground is usually at a potential of 30 volts, while the grid voltage is held at 180–210 volts DC, unless there is an optional electron-bombardment feature, in which heating the grid may bring it to a high potential of approximately 565 volts. The most common ion gauge is the hot cathode Bayard-Alpert gauge, with a small ion collector inside the grid. A glass envelope with an opening to the vacuum can surround the electrodes, but usually the nude gauge is inserted in the vacuum chamber directly, the pins being fed through a ceramic plate in the wall of the chamber. Hot cathode gauges can be damaged or lose their calibration if they are exposed to atmospheric pressure or even low vacuum while hot. The measurements of a hot cathode ionization gauge are always logarithmic.
Electrons emitted from the filament move back and forth around the grid several times before finally entering it. During these movements, some electrons collide with gaseous molecules to form ion-electron pairs (electron ionization). The number of these ions is proportional to the gaseous molecule density multiplied by the electron current emitted from the filament, and these ions pour into the collector to form an ion current. Since the gaseous molecule density is proportional to the pressure, the pressure is estimated by measuring the ion current.
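The proportionality just described is usually written as an ion current equal to a gauge sensitivity times pressure times emission current, which the controller inverts to display pressure. The sensitivity value and currents below are illustrative order-of-magnitude figures, not from the text:

```python
# Hedged sketch of the hot-cathode relation described above:
# I_ion = S * P * I_emission, where S is the gauge sensitivity.
# The sensitivity and current values are illustrative assumptions.

SENSITIVITY = 10.0  # per Torr, a typical order of magnitude for nitrogen

def pressure_from_currents(ion_current_a, emission_current_a, s=SENSITIVITY):
    """Invert I_ion = S * P * I_e for the pressure P (in Torr)."""
    return ion_current_a / (s * emission_current_a)

# 40 pA of collector current at 4 mA of emission current:
print(pressure_from_currents(40e-12, 4e-3))  # about 1e-9 Torr
```

Because the sensitivity S depends on gauge geometry and gas species, this inversion is only as good as the calibration, as the text notes.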
The low pressure sensitivity of hot cathode gauges is limited by the photoelectric effect. Electrons hitting the grid produce x-rays that produce photoelectric noise in the ion collector. This limits the range of older hot cathode gauges to 10⁻⁸ Torr and the Bayard-Alpert to about 10⁻¹⁰ Torr. Additional wires at cathode potential in the line of sight between the ion collector and the grid prevent this effect. In the extraction type, the ions are not attracted by a wire but by an open cone. As the ions cannot decide which part of the cone to hit, they pass through the hole and form an ion beam, which can be passed on to a mass spectrometer.
Such gauges cannot operate if the ions generated by the cathode recombine before reaching the anodes. If the mean free path of the gas within the gauge is smaller than the gauge's dimensions, then the electrode current will essentially vanish. A practical upper bound to the detectable pressure for a Penning gauge is of the order of 10⁻³ Torr.
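The kinetic-theory mean free path gives a feel for this upper bound: near 10⁻³ Torr it shrinks to a few centimetres, comparable to typical gauge dimensions. The molecular diameter and temperature below are assumed illustrative values:

```python
import math

# Hedged sketch: mean free path from kinetic theory,
# lambda = k_B * T / (sqrt(2) * pi * d^2 * P).
# The nitrogen kinetic diameter and 300 K temperature are assumptions.

K_B = 1.380649e-23  # J/K, Boltzmann constant
D_N2 = 3.7e-10      # m, approximate kinetic diameter of N2

def mean_free_path(pressure_pa, temperature_k=300.0, diameter_m=D_N2):
    return K_B * temperature_k / (
        math.sqrt(2.0) * math.pi * diameter_m**2 * pressure_pa)

# At about 1e-3 Torr (roughly 0.133 Pa) the mean free path is a few
# centimetres, comparable to the dimensions of a typical gauge:
print(round(mean_free_path(0.133), 3))  # about 0.051 m
```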
Similarly, cold cathode gauges may be reluctant to start at very low pressures, in that the near-absence of a gas makes it difficult to establish an electrode current, particularly in Penning gauges, which use an axially symmetric magnetic field to create path lengths for ions on the order of metres. In ambient air, suitable ion pairs are ubiquitously formed by cosmic radiation; in a Penning gauge, design features are used to ease the set-up of a discharge path. For example, the electrode of a Penning gauge is usually finely tapered to facilitate the field emission of electrons.
Maintenance cycles of cold cathode gauges are generally measured in years, depending on the gas type and pressure in which they are operated. Using a cold cathode gauge in gases with substantial organic components, such as pump oil fractions, can result in the growth of delicate carbon films and shards within the gauge that eventually either short-circuit its electrodes or impede the generation of a discharge path.