
History of the metre

From Wikipedia, the free encyclopedia


During the French Revolution, the traditional units of measure were to be replaced by consistent measures based on natural phenomena. As a base unit of length, scientists had favoured the seconds pendulum (a pendulum with a half-period of one second) a century earlier, but this was rejected once it was discovered that its length varied from place to place with local gravity. A new unit of length, the metre, was introduced, defined as one ten-millionth of the shortest distance from the North Pole to the Equator passing through Paris, assuming an Earth flattening of 1/334.
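To make the scale concrete, here is a back-of-the-envelope restatement of the definition (a sketch, not a historical computation; a and b denote the equatorial and polar semi-axes, which are not named above):

\[
1\ \text{m} \equiv \frac{q}{10\,000\,000}, \qquad f = \frac{a-b}{a} = \frac{1}{334},
\]

where q is the pole-to-equator meridian arc through Paris, which this definition makes exactly 10 000 km by construction.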

Replicas of historical metric standards, including an iron copy of the mètre des Archives.

Following the arc measurement of Delambre and Méchain, the historical French official standard of the metre was made available in the form of the Mètre des Archives, a platinum bar held in Paris. During the mid-nineteenth century, following the American Revolution and the independence of Latin America, the metre gained adoption in the Americas, particularly in scientific usage, and it was officially established as an international measurement unit by the Metre Convention of 1875, at the beginning of the Second Industrial Revolution.

From 1889, at the initiative of the International Geodetic Association, the Mètre des Archives and its copies, such as the Committee Meter, were replaced by thirty platinum–iridium bars kept across the globe.[1] Improved standardisation of the new prototypes of the metre, and their comparison with each other and with the historical standard, involved the development of specialised measuring equipment and the definition of a reproducible temperature scale.[2]

In collaboration with the International Geodetic Association, created to measure the Earth, the International Bureau of Weights and Measures became the world reference centre for the measurement of geodetic baselines, thanks to the discovery of invar, an alloy of nickel and iron with a coefficient of thermal expansion close to zero.[3][4]

Progress in science finally allowed the definition of the metre to be dematerialised; thus in 1960 a new definition based on a specific number of wavelengths of light from a specific transition in krypton-86 allowed the standard to be universally available by measurement. In 1983 this was updated to a length defined in terms of the speed of light; this definition was reworded in 2019:[5]

The metre, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m⋅s⁻¹, where the second is defined in terms of the caesium frequency ΔνCs.

Where older traditional length measures are still used, they are now defined in terms of the metre – for example the yard has since 1959 officially been defined as exactly 0.9144 metre.[6]
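Since a yard is 36 inches, the 1959 definition also fixes the inch exactly (simple arithmetic):

\[
1\ \text{in} = \frac{0.9144\ \text{m}}{36} = 0.0254\ \text{m} = 25.4\ \text{mm}.
\]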


Background


Historically, units of measurement varied greatly, even when called by the same name. Some kingdoms and other polities standardised some measurements, but in others, such as France before the French Revolution, units could still vary from place to place. During the Scientific Revolution, various "universal measures" of length were proposed which would be based on reproducible natural phenomena, in particular the pendulum and the Earth.

The seconds pendulum

Around 1602, Galileo observed that the period of a pendulum's regular swing depended on its length.[7] In 1645, Giovanni Battista Riccioli determined the length of a pendulum whose swing takes one second each way, a "seconds pendulum".[8][a] In 1671, Jean Picard proposed this length as a unit of measurement, to be called the Rayon Astronomique (astronomical radius).[9][10][11] In 1675, Tito Livio Burattini suggested calling it the metro cattolico (universal measure).[12] However, in 1671–1673 the astronomer Jean Richer discovered that the length of a seconds pendulum also varies from place to place, by as much as 0.28%.[13][11]
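Richer's observation is what the small-amplitude pendulum formula predicts; in modern notation (a sketch, not the historical derivation):

\[
T = 2\pi\sqrt{\frac{L}{g}} \quad\Longrightarrow\quad L = \frac{g T^2}{4\pi^2} = \frac{g}{\pi^2} \approx 0.994\ \text{m} \qquad (T = 2\ \text{s},\ g \approx 9.81\ \text{m}\,\text{s}^{-2}).
\]

Since L is proportional to g, a 0.28% variation in the pendulum's length directly mirrors the variation of local gravity between the sites compared.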

The Earth

In the 18th century, the French Academy of Sciences organised work on cartography and geodesy, which included measuring the size and shape of the Earth.[14] Through surveys in Ecuador and Lapland it was found that the Earth is not a perfect sphere but rather an oblate spheroid.[15]

Decimals

The use of a decimal scale for measurement was proposed in 1586 by the Flemish mathematician Simon Stevin.[citation needed]


Mètre des Archives


In 1790, during the French Revolution, the National Assembly tasked the French Academy of Sciences with reforming the units of measurement. The Academy formed a commission, which rejected using the pendulum as a unit of length[16] and decided that the new measure should be equal to one ten-millionth of the distance from the North Pole to the Equator (a quadrant of the Earth's circumference). This was to be measured along the meridian passing through the Panthéon in Paris.[17][18]

A copy of the "provisional" metre set into the wall of a building at 15 rue de Vaugirard, Paris.[19]

However, pending completion of that work, a measurement from Dunkirk to Collioure made in 1740 was used, and, following legislation of 7 April 1795,[20] provisional metal metre bars were distributed in France in 1795–1796.[21]

An early definition of the metre was one ten-millionth of the Earth quadrant, the distance from the North Pole to the Equator, after the arc measurement of Delambre and Méchain.

In 1799, the measurement of part of the meridian, from Dunkirk to Barcelona, was completed and a correction for the Earth's non-spherical shape calculated from that and another survey.[22][23] A metre bar was accordingly made of platinum and designated by law as the primary standard metre. This was kept in the National Archives and known as the Mètre des Archives.[24]

Another platinum metre, calibrated against the Mètre des Archives, and twelve iron ones were made as secondary standards.[25]

Adoption

One of the iron metre standards was brought to the United States in 1805.[26] It became known as the Committee Meter in the United States and served as a standard of length in the United States Coast Survey until 1890.[27][26][28][29]

In 1855, the Dufour map (French: Carte Dufour), the first topographic map of Switzerland for which the metre was adopted as the unit of length, won the gold medal at the Exposition Universelle.[30][31] On the sidelines of the Exposition Universelle (1855) and the second Congress of Statistics held in Paris, an association was created in 1855 with a view to obtaining a uniform decimal system of measures, weights and currencies.[32] A Committee for Weights and Measures and Monies (French: Comité des poids, mesures et monnaies) was created during the Exposition Universelle (1867) in Paris and called for the international adoption of the metric system.[3][32]

In the United States, the Metric Act of 1866 allowed the use of the metre,[33] and in 1867 the General Conference of the European Arc Measurement (German: Europäische Gradmessung) called for the creation of an International Bureau of Weights and Measures.[34][35]

At the Metre Convention of 1875 the metre was adopted as an international scientific unit of length.


International prototype metre

Closeup of National Prototype Metre Bar No. 27, made in 1889 by the International Bureau of Weights and Measures (BIPM) in collaboration with Johnson Matthey and given to the United States.[36] From 1890 it served as the standard for American cartography, replacing the Committee Meter, an authentic copy of the Mètre des Archives produced in 1799 in Paris, which Ferdinand Rudolph Hassler had brought to the United States in 1805.[26]

In the late nineteenth century, a new international standard metre, called a "prototype",[b] was made, along with copies to serve as national standards. It was a "line standard": the metre was defined as the distance between two lines marked on the bar, making any wear at the ends irrelevant.[37][38]

The construction was at the limits of technology. The bars were made of a special alloy of 90% platinum and 10% iridium, significantly harder than pure platinum, with a special X-shaped cross-section (a "Tresca section", named after the French engineer Henri Tresca) to minimise the effects of torsional strain during length comparisons.[6][38] The first castings proved unsatisfactory, and the job was given to the London firm of Johnson Matthey, which succeeded in producing thirty bars to the required specification. One of these, No. 6, was determined to be identical in length to the Mètre des Archives, and was designated the international prototype metre at the first meeting of the CGPM in 1889. The other bars, duly calibrated against the international prototype, were distributed to the signatory nations of the Metre Convention for use as national standards.[39] For example, the United States received No. 27 with a calibrated length of 0.9999984 m ± 0.2 μm (1.6 μm short of the international prototype).[40][32]
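The quoted shortfall for No. 27 is simply its difference from the nominal metre:

\[
1\ \text{m} - 0.9999984\ \text{m} = 1.6\times 10^{-6}\ \text{m} = 1.6\ \mu\text{m}.
\]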

As bar lengths vary with temperature, precise measurements required known and stable temperatures and could even be affected by a scientist's body heat,[41] so standard metres were provided with precise thermometers.[42]

The first (and only) follow-up comparison of the national standards with the international prototype was carried out between 1921 and 1936,[6][39] and indicated that the definition of the metre was preserved to within 0.2 μm.[43] At this time, it was decided that a more formal definition of the metre was required (the 1889 decision had said merely that the "prototype, at the temperature of melting ice, shall henceforth represent the metric unit of length"), and this was agreed at the 7th CGPM in 1927.[44]

The unit of length is the metre, defined by the distance, at 0 °C, between the axes of the two central lines marked on the bar of platinum–iridium kept at the Bureau International des Poids et Mesures and declared Prototype of the metre by the 1st Conférence Générale des Poids et Mesures, this bar being subject to standard atmospheric pressure and supported on two cylinders of at least one centimetre diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.

These support locations are at the Bessel points of the prototype: the support points, separated by 0.5594 of the total length of the bar,[45] that minimise shortening of the bar due to bending under its own weight.[46] Because the prototype is a line standard, its full length is 102 cm, slightly longer than 1 metre.[47][48] Cross-sectionally, it measures 16 mm × 16 mm.[49]
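The two figures are consistent, as a quick check on the 102 cm bar shows:

\[
0.5594 \times 1020\ \text{mm} \approx 570.6\ \text{mm} \approx 571\ \text{mm}.
\]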


From standard bars to wavelength of light


Charles Sanders Peirce's work helped place American science at the forefront of global metrology. Alongside his intercomparisons of metre artifacts and his contributions to gravimetry through improvement of the reversible pendulum, Peirce was the first to experimentally tie the metre to the wavelength of a spectral line. He proposed that the standard length might be compared with that of a wave of light identified by a line in the solar spectrum. Albert Abraham Michelson soon took up the idea and improved it.[50][51]

Interferometric options

A krypton-86 lamp used to define the metre between 1960 and 1983

The first interferometric measurements carried out using the international prototype metre were those of Albert A. Michelson and Jean-René Benoît (1892–1893)[52] and of Benoît, Fabry and Perot (1906),[53] both using the red line of cadmium. These results, which gave the wavelength of the cadmium line (λ ≈ 644 nm), led to the definition of the ångström as a secondary unit of length for spectroscopic measurements, first by the International Union for Cooperation in Solar Research (1907)[54] and later by the CIPM (1927).[39][55] Michelson's work in "measuring" the prototype metre to within 1/10 of a wavelength (< 0.1 μm) was one of the reasons for which he was awarded the Nobel Prize in Physics in 1907.[6][39][56]

By the 1950s, interferometry had become the method of choice for precise measurements of length, but there remained a practical problem imposed by the system of units used. The natural unit for expressing a length measured by interferometry was the ångström, but this result then had to be converted into metres using an experimental conversion factor: the wavelength of the light used, measured in metres rather than in ångströms. This introduced an additional measurement uncertainty into any length result expressed in metres, over and above the uncertainty of the actual interferometric measurement.

The solution was to define the metre in the same manner as the ångström had been defined in 1907, that is, in terms of the best interferometric wavelength available. Advances in both experimental technique and theory showed that the cadmium line was actually a cluster of closely separated lines, due to the presence of different isotopes in natural cadmium (eight in total). To get the most precisely defined line, it was necessary to use a monoisotopic source, and this source should contain an isotope with even numbers of protons and neutrons (so as to have zero nuclear spin).[6]
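For concreteness: the 1907 convention fixed the red cadmium line at 6438.4696 international ångströms, which reproduces the wavelength quoted earlier:

\[
\lambda_{\text{Cd}} = 6438.4696\ \text{Å} = 643.84696\ \text{nm} \approx 644\ \text{nm}.
\]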

Several isotopes of cadmium, krypton and mercury fulfil both conditions: they have zero nuclear spin and bright lines in the visible region of the spectrum.

Krypton standard

Krypton is a gas at room temperature, allowing for easier isotopic enrichment and lower operating temperatures for the lamp (which reduces broadening of the line due to the Doppler effect), and so it was decided to select the orange line of krypton-86 (λ ≈ 606 nm) as the new wavelength standard.[6][57]

Accordingly, the 11th CGPM in 1960 agreed a new definition of the metre:[44]

The metre is the length equal to 1 650 763.73 wavelengths in vacuum of the radiation corresponding to the transition between the levels 2p₁₀ and 5d₅ of the krypton 86 atom.

The measurement of the wavelength of the krypton line was not made directly against the international prototype metre; instead, the ratio of the wavelength of the krypton line to that of the cadmium line was determined in vacuum. This was then compared to the 1906 Fabry–Perot determination of the wavelength of the cadmium line in air (with a correction for the refractive index of air).[6][43] In this way, the new definition of the metre was traceable to both the old prototype metre and the old definition of the angstrom.
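The 1960 definition fixes the krypton wavelength by simple division, consistent with the orange line quoted above (λ ≈ 606 nm):

\[
\lambda_{\text{Kr}} = \frac{1\ \text{m}}{1\,650\,763.73} \approx 605.780\ \text{nm}.
\]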

Speed of light standard

A helium–neon laser at the Kastler-Brossel Laboratory at Univ. Paris 6

The krypton-86 discharge lamp operating at the triple point of nitrogen (63.14 K, −210.01 °C) was the state-of-the-art light source for interferometry in 1960, but it was soon to be superseded by a new invention: the laser, of which the first working version was constructed in the same year as the redefinition of the metre.[58] Laser light is usually highly monochromatic, and is also coherent (all the light has the same phase, unlike the light from a discharge lamp), both of which are advantageous for interferometry.[6]

The shortcomings of the krypton standard were demonstrated by the measurement of the wavelength of the light from a methane-stabilised helium–neon laser (λ ≈ 3.39 μm). The krypton line was found to be asymmetrical, so different wavelengths could be found for the laser light depending on which point on the krypton line was taken for reference.[c] The asymmetry also affected the precision to which the wavelengths could be measured.[59][60]

Developments in electronics also made it possible for the first time to measure the frequency of light in or near the visible region of the spectrum,[further explanation needed] instead of inferring the frequency from the wavelength and the speed of light. Although visible and infrared frequencies were still too high to be directly measured, it was possible to construct a "chain" of laser frequencies that, by suitable multiplication, differ from each other by only a directly measurable frequency in the microwave region. The frequency of the light from the methane-stabilised laser was found to be 88.376 181 627(50) THz.[59][61]

Independent measurements of frequency and wavelength are, in effect, a measurement of the speed of light (c = fλ), and the results from the methane-stabilised laser gave the value for the speed of light with an uncertainty almost 100 times lower than previous measurements in the microwave region. Or, somewhat inconveniently, the results gave two values for the speed of light, depending on which point on the krypton line was chosen to define the metre.[d] This ambiguity was resolved in 1975, when the 15th CGPM approved a conventional value of the speed of light as exactly 299 792 458 m s⁻¹.[62]
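As a consistency check (not an independent measurement), multiplying the measured frequency by the maximum-intensity methane wavelength from note [c] reproduces the corresponding value in note [d]:

\[
c = f\lambda \approx (88.376\,181\,627\times 10^{12}\ \text{Hz})\times(3.392\,231\,404\times 10^{-6}\ \text{m}) \approx 299\,792\,458.7\ \text{m}\,\text{s}^{-1}.
\]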

Nevertheless, the infrared light from a methane-stabilised laser was inconvenient for use in practical interferometry. It was not until 1983 that the chain of frequency measurements reached the 633 nm line of the helium–neon laser, stabilised using molecular iodine.[63][64] That same year, the 17th CGPM adopted a definition of the metre, in terms of the 1975 conventional value for the speed of light:[65]

The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.
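Equivalently, one metre is the distance light travels in about 3.34 nanoseconds:

\[
\Delta t = \frac{1}{299\,792\,458}\ \text{s} \approx 3.3356\times 10^{-9}\ \text{s}.
\]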

This definition was reworded in 2019:[5]

The metre, symbol m, is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum c to be 299 792 458 when expressed in the unit m⋅s⁻¹, where the second is defined in terms of the caesium frequency ΔνCs.

The concept of defining a unit of length in terms of a time received some comment.[66] In both cases, the practical issue is that time can be measured more accurately than length (one part in 10¹³ for a second using a caesium clock as opposed to four parts in 10⁹ for the metre in 1983).[55][66] The definition in terms of the speed of light also means that the metre can be realised using any light source of known frequency, rather than defining a "preferred" source in advance. Given that there are more than 22,000 lines in the visible spectrum of iodine, any of which could be potentially used to stabilise a laser source, the advantages of flexibility are obvious.[66]


Summary of definitions since 1798

1795: Provisional metre bars, based on the 1740 Dunkirk–Collioure measurement.
1799: Mètre des Archives, a platinum bar representing one ten-millionth of the Earth quadrant from the Delambre–Méchain survey.
1889: International prototype metre, a platinum–iridium line standard at the temperature of melting ice (1st CGPM).
1927: Prototype definition formalised, at 0 °C, standard atmospheric pressure, and specified supports (7th CGPM).
1960: 1 650 763.73 wavelengths in vacuum of the orange line of krypton-86 (11th CGPM).
1983: Length of the path travelled by light in vacuum in 1/299 792 458 of a second (17th CGPM).
2019: Definition reworded in terms of the fixed numerical value of the speed of light and the caesium second.

Notes

  1. At the time, the second was defined as a fraction of the Earth's rotation time and determined by clocks whose precision was checked by astronomical observations. In 1936, French and German astronomers found that the speed of the Earth's rotation is irregular. Since 1967 the second has been defined by atomic clocks. For further information see atomic time.
  2. The term "prototype" does not imply that it was the first in a series and that other standard metres would come after it: the "prototype" metre was the one that came first in the chain of comparisons, the metre to which all other standards were compared.
  3. Taking the point of highest intensity as the reference wavelength, the methane line had a wavelength of 3.392 231 404(12) μm; taking the intensity-weighted mean point ("centre of gravity") of the krypton line as the standard, the wavelength of the methane line is 3.392 231 376(12) μm.
  4. The measured speed of light was 299 792.4562(11) km s−1 for the "centre-of-gravity" definition and 299 792.4587(11) km s−1 for the maximum-intensity definition, with a relative uncertainty ur = 3.5×10−9.

References
