Don’t Assume What You Don’t Know

By The Metric Maven

Bulldog Edition

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

– Mark Twain

When I first learned about viruses, I found them terrifying. They were as terrifying as any plot in a science fiction novel. They were unimaginably small. The virus we were shown had a set of “legs” which it could use to attach itself to a cell. It would then inject material through the cell wall from a head that looked like a gem. The cell would then become a zombie and make more viruses until it exploded like a popping balloon, scattering the new viruses everywhere to find new cells. What I took away from the grade school lesson was that viruses were so much smaller than bacteria that it was like comparing an M&M to a basketball.

During one summer break in college I read One Two Three… Infinity by George Gamow. The book was published in 1947, but remains a classic. In one section of the book, Gamow explains that the bacterium responsible for typhoid fever has an elongated body “about 3 microns (µ) long, and 1/2 µ across, whereas the bacteria of scarlet fever are spherically shaped cells about 2 microns in diameter.” A footnote states: “A micron is one thousandth of a millimeter, or 0.0001 cm.” It would not be until 1960 that the micrometer would become an accepted term.

Gamow explains there are a number of diseases, such as influenza “…or the so called mosaic-disease in the tobacco plant, where ordinary microscopic observations failed to discover any normal sized bacteria.” Clearly something existed which was transmitting disease, but what was it? “…it was necessary to assume that they were associated with some kind of hypothetical biological carriers, which received the name virus.” The use of ultraviolet light and the development of the electron microscope allowed researchers finally to see viruses and describe their structure.

Above I have reproduced the prose and type used by Gamow. You will note that he just uses the Greek letter µ to represent a micron, which is, of course, a micrometer or µm. Gamow continues:

Remembering that the diameter of one atom is about 0.0003 µ, we conclude that the particle of tobacco-mosaic virus measures only about fifty atoms across, and about a thousand atoms along the axis.

In modern terms:

Typhoid fever bacteria:  3000 nm x 500 nm
Scarlet fever bacteria:  2000 nm in diameter

Tobacco mosaic virus:  300 nm x 18 nm
Influenza virus:  100 nm across

Atomic diameter:  0.3 nm
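The conversion from Gamow’s microns to modern units is pure prefix arithmetic. A minimal Python sketch, using the figures quoted above:

```python
# Convert Gamow's micron (µ, i.e. µm) figures to nanometers: 1 µm = 1000 nm.
MICRON_TO_NM = 1000

sizes_in_microns = {
    "Typhoid fever bacterium (length)": 3.0,
    "Typhoid fever bacterium (width)": 0.5,
    "Scarlet fever bacterium (diameter)": 2.0,
    "Atom (diameter)": 0.0003,
}

for name, microns in sizes_in_microns.items():
    print(f"{name}: {microns * MICRON_TO_NM:g} nm")
```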

Excellent graphic from New Scientist, “Pandora Challenges the Meaning of Life” (2013-07-13), p. 10, with all dimensions in nanometers.

Viruses are clearly much, much smaller than bacteria, and much, much harder to study. But then in 2003, an organism which at first appeared to be a bacterium, because a common test indicated it was, was discovered to be a virus. It was dubbed a Mimivirus, or microbe-mimicking virus. This virus has a diameter of 750 nanometers, which is just plain gigantic. Suddenly researchers found “giant” viruses everywhere! Why was this? These viruses are really big, and we have much better microscopes and tools than existed in 1947; how could it be that no one identified them until the 21st century? An American Scientist article from 2011 entitled Giant Viruses explains:

Most giant viruses have only been discovered and characterized in the past few years. There are several reasons why these striking biological entities remained undetected for so long. Among the most consequential is that the classic tool for isolating virus particles is filtration through filters with pores of 200 nanometers. With viruses all but defined as replicating particles that occur in the filtrate of this treatment, giant viruses were undetected over generations of virology research. (Mimivirus disrupted this evasion tactic by being so large it was visible under a light microscope.)

It was assumed that viruses were smaller than 200 nanometers, so they were filtered out. No one even thought to look for them; it was “what you know for sure that just ain’t so” that precluded researchers from seeing these viruses for over fifty years. In 2014 researchers resurrected a 30 000-year-old virus from Siberian permafrost which is the largest thus far discovered. It is called Pithovirus sibericum and measures 1500 nm in length and 500 nm in width, which approaches the dimensions of the typhoid and scarlet fever bacteria cited by Gamow.

In the 1980s, ground-based British researchers in Antarctica noticed a large depletion of ozone above them. The numbers were so low that there was concern the instrumentation was faulty. Satellite measurements did not reveal the ozone depletion, which created a considerable conundrum. According to folklore, the satellite numbers were so low that the processing software, which had a built-in lower limit, threw them out as bad data. The story is difficult to pin down and a bit apocryphal, but it illustrates what I’ve seen in my own career.

When I was first creating computer models of antenna radiation using a computational method called FDTD, I calculated radiation patterns. There are two different values to calculate; one is large and the other smaller. An antenna was fabricated and measured to check my analysis. The large value was measured to be almost exactly the value expected. I was confused when the analysis showed a clearly defined smaller-valued radiation pattern, but the measurements indicated no antenna pattern was present. When I showed this anomaly to the person who had written the program for the measurement chamber, he remarked, “That’s so low, it’s meaningless. I just set the value to the bottom of the expected range.” I had to convince him to change the computer code so it would output the actual numbers, and to his surprise, the smaller data was far, far more accurate than he had imagined.
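That failure mode is easy to reproduce. Below is a minimal sketch with hypothetical numbers: a radiation pattern with a genuine low-level lobe, and a display routine that clamps anything below an assumed -60 dB floor, exactly the kind of “helpful” limit that hid my data:

```python
import numpy as np

# Simulated radiation pattern in dB: a strong main lobe near 0 dB and a
# genuine, low-level side lobe peaking around -65 dB (hypothetical numbers).
angles = np.linspace(-90, 90, 181)
main_lobe = -0.05 * angles**2                          # falls off quickly
side_lobe = -70 + 5 * np.exp(-((angles - 60) / 5)**2)  # real low-level structure
pattern_db = np.maximum(main_lobe, side_lobe)

# What the chamber software in the story did: clamp anything below the
# "expected" floor, silently discarding real data.
FLOOR_DB = -60
clamped = np.maximum(pattern_db, FLOOR_DB)

# The side lobe survives in the raw data but vanishes after clamping.
print("raw pattern at 60 degrees:    ", pattern_db[angles == 60][0])  # -65 dB
print("clamped pattern at 60 degrees:", clamped[angles == 60][0])     # -60 dB
```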

This brings me to a ubiquitous metric system fallacy that seems rooted in the heritage of our Ye Olde English Arbitrary Grouping of Weights and Measures, or Ye Olde English. This fallacy Sven and I call “The Implied Precision Fallacy.” It is the idea that one should choose measurement units based on a preconceived notion of the magnitude one expects to measure, and of the expected measurement error. It also implies that if you measure beyond this, you are claiming to measure to that precision.

This fallacy may have its roots in the past, when the available precision of a measurement device or a fabrication device (like a mill or lathe) might have limited how far down one could measure or fabricate to a given accuracy. If a person could not measure or construct to a particular precision and accuracy because of tooling limitations, one might be tempted to argue not to bother with decimal places beyond what was thought possible.

Unfortunately this argument can be turned around into a rationalization: if that’s the best one can do, that’s the best one actually needs. I was told many times, by many school teachers, to choose medieval units which reflect the expected precision, and that using smaller ones was very poor practice. It was overkill. The size of the units chosen would imply the precision of the measurement, so use as large a unit as possible. But when the size of a unit is used a priori as an assertion of precision, the limitation is a chosen one, not a well-informed one. It is in fact a guess.

I have run into many situations in my career where data looks like noise, and then, using signal processing, useful information is obtained. The GPS signal you use to guide your car trips is well below the noise level at that frequency. It is like standing in the top row of a football stadium and trying to hold a conversation with a person on the other side of the field, also in the top row, while the crowd cheers. Impossible, right? Well, perhaps not. Signal processing can do amazing things, but if you argue there is no way to make a measurement precise enough, you never will. It is a self-imposed psychological measurement limit, not a technical one.
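For the curious, here is a minimal sketch of the idea behind that processing gain. It is not the real GPS signal structure (actual C/A codes are 1023-chip sequences with carrier tracking); it simply buries a known ±1 code far below the noise and recovers it by correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A known +/-1 "spreading code" of hypothetical length N.
N = 100_000
code = rng.choice([-1.0, 1.0], size=N)

# Transmit it at an amplitude far below the noise floor:
# signal power 0.0025 vs. noise power 1.0, i.e. an SNR of -26 dB.
amplitude = 0.05
received = amplitude * code + rng.normal(0.0, 1.0, size=N)

# Correlating against the known code averages the noise away; the error
# in the estimate shrinks as 1/sqrt(N) -- the processing gain.
estimate = np.dot(received, code) / N
print(f"true amplitude: {amplitude}, recovered: {estimate:.4f}")
```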

This brings me to the measurement of people’s heights and the mass of babies. I have been taken to task for arguing that height should be measured in mm, just as lengths are in the Australian construction industry. Some commentators argue that this is too much precision; a person’s height changes so much that millimeters simply have too much precision to have any meaning. There is no use taking measurements calibrated to millimeters, the argument goes, because we already know we don’t need them. This is a very Platonic argument. It is also nonsense on a number of levels.

First, the data itself should reveal where the precision no longer exists. If one can show that a certain set of digits on the right are randomly distributed, then one can obtain an implied measurement precision, but that is not the end of the story. Even digits which appear to be random may contain information which can be extracted. I’ve never seen a situation where measurements were too precise and led people to miss an effect, but I have seen situations where effects were masked by truncation. Measuring a person’s height as 1753 mm does not assault good technical practice; it is an example of it. One can always write this value as 1.75 meters immediately, just by inspecting the millimeters, but one has then taken a simple integer and needlessly introduced a decimal point. The two representations use the same number of symbols.
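The check I have in mind is straightforward. A minimal sketch with hypothetical height data recorded in millimeters: if the final digit is uniformly distributed, it carries no information and marks the implied precision; if not, there is structure left to extract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heights in mm: population spread plus a few mm of
# measurement noise, recorded to the millimeter.
heights_mm = np.round(1700 + rng.normal(0, 40, 5000)
                      + rng.normal(0, 4, 5000)).astype(int)

# Chi-square test of the last digit against a uniform distribution.
digits = heights_mm % 10
counts = np.bincount(digits, minlength=10)
expected = len(heights_mm) / 10
chi2 = ((counts - expected) ** 2 / expected).sum()

print("last-digit counts:", counts)
# With 9 degrees of freedom, values near 9 are consistent with uniform
# (random) digits; values well above ~17 would indicate remaining structure.
print(f"chi-square statistic: {chi2:.1f}")
```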

The grouping of numbers in threes appears to be of great utility in our society. From one thousand (1,000) to one million (1,000,000) to one billion (1,000,000,000), these values have been written in groups of three since long before I made my appearance on this planet. The breaks in metric prefixes are at the locations of the commas. In other countries only a space is used above four digits: 1000, 1 000 000, 1 000 000 000 (one can use a space with four digits also; it’s just not my preference). This is also done in many US numerical analysis references.
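Either grouping is a one-line formatting choice in most programming languages. A Python sketch:

```python
# Group digits in threes: comma separators (US style) versus the
# space-separated style used in many other countries.
n = 1_000_000_000  # Python itself allows underscore grouping in literals

print(f"{n:,}")                    # 1,000,000,000
print(f"{n:,}".replace(",", " "))  # 1 000 000 000
```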

Pat Naughtin, in his TEDx Melbourne lecture of 2010-03-13, discussed a scale which measured the weight (mass) of babies. A baby wriggles, which requires the device to take large numbers of measurements and statistically extract the mass. The precision and accuracy of this scale is to within a gram. The weight of a baby is supposed to increase with time; a decrease, even a very small one, could indicate a potential health issue.

Should the baby have an infection, accurate knowledge of its mass is important so a properly proportioned amount of medicine can be prescribed. Naughtin points out that yet again there is no measurement policy in this instance, and no one in charge of one. He argued there is a potential danger when babies are measured in kilograms and rounded to the nearest tenth of a kilogram, which is the accepted practice in Australia. The use of a decimal point, and rounding, creates numbers which have been decanted of information. The number is too close to unity for a clear understanding of changes in its magnitude. Using grams allows one to eliminate fractions, decimal or otherwise, and compare simple integers.
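A minimal sketch, with hypothetical masses and noise levels, of both points: the scale averages many readings from a wriggling baby, and rounding the result to a tenth of a kilogram erases a decrease that grams make obvious:

```python
import numpy as np

rng = np.random.default_rng(2)

def weigh_baby(true_mass_g, n_samples=400):
    """Average many noisy readings from a wriggling baby
    (hypothetical +/-50 g of noise per reading)."""
    readings = true_mass_g + rng.normal(0, 50, n_samples)
    return readings.mean()

yesterday = weigh_baby(4180)  # grams
today = weigh_baby(4160)      # a 20 g decrease -- possibly meaningful

print(f"in grams:   {yesterday:.0f} g -> {today:.0f} g  (decrease visible)")
print(f"rounded kg: {round(yesterday / 1000, 1)} kg -> "
      f"{round(today / 1000, 1)} kg  (decrease erased)")
```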

Years ago, when I lived in Montana, I encountered shade-tree mechanics, small-engine mechanics, construction contractors, and others. One phrase seemed to be ubiquitous:

“He’s the kind of guy that will measure something with a micrometer, mark it with chalk and cut it with an axe.”

It showed a common understanding that over-precision does not hurt, and that a person who throws it away is not good at his profession, be it mechanic, welder, contractor, or any other skilled vocation. The Australian construction industry has saved large amounts of money by measuring in millimeters; they have no need for a decimal point, and the numbers are simple. The argument that measuring a person’s height in millimeters, or a baby’s weight in grams, is “too precise” is a cultural argument, not a technical one. Arguing that lots of people perform a measurement a certain way, or that an authority like the EU or the medical profession has endorsed it, is an argument from authority.

In my essay Metamorphosis and Millimeters, I point out that for thousands of years people created beehives made of clay, which had to be destroyed in order to obtain the honey. It was only in the 19th century that an American inventor had the temerity to question this dogma and created the modern beehive. Common usage over a long period of time does not imply that the common usage is optimum; this assumption is a version of the technical-Darwinism argument that anti-metric people use as a straw-man cudgel. It is increasing measurement precision (and accuracy) which has allowed the creation of a modern technical society and which is at the forefront of scientific discovery. Arguing otherwise is arguing against all the benefits increased measurement accuracy has provided. There is no set of “common person’s measurements” and a separate set of “scientific people’s measurements”; there are only precise measurements.


If you liked this essay and wish to support the work of The Metric Maven, please visit his Patreon Page and contribute. Also purchase his books about the metric system:

The first book is titled: Our Crumbling Invisible Infrastructure. It is a succinct set of essays that explain why the absence of the metric system in the US is detrimental to our personal health and our economy. These essays are separately available for free on my website, but the book has them all in one place in print. The book may be purchased from Amazon here.


The second book is titled The Dimensions of the Cosmos. It takes the metric prefixes from yotta to yocto and uses each metric prefix to describe a metric world. The book has a considerable number of color images to complement the prose. It has been receiving good reviews. I think it would be a great reference for US science teachers. It has a considerable number of scientific factoids and anecdotes that I believe would be of considerable educational use. It is available from Amazon here.


The third book is called Death By A Thousand Cuts, A Secret History of the Metric System in The United States. This monograph explains how we have been unable to legally deal with weights and measures in the United States from George Washington to our current day. This book is also available on Amazon here.

Updated 2015-01-21

3 thoughts on “Don’t Assume What You Don’t Know”

  1. I have just had twins in Australia – their weight has been recorded in grams and their length in centimetres. Adult weights are always kilograms, though.

  2. It is a common misconception that Fahrenheit is more accurate due to its greater resolution of numbers. Actually, in practice, this is very untrue. Fahrenheit has greater resolution, but not greater precision. Temperature is not homogeneous throughout a medium. There are hot spots and cold spots that can vary by a degree or two. When we want to know the temperature of a space, we want to know the average, not the specific temperature at a single point.

    The Celsius scale is actually more precise for this. It gives us the “average” for the space in whole numbers. Fahrenheit is too fine and in fact measures a lot of what would be considered thermal noise and not actual temperature. The supposed extra resolution of Fahrenheit doesn’t really provide you with useful information on what your body feels.

    People in metric countries who work daily in degrees Celsius are more apt to be able to feel the Celsius temperature without the use of a thermometer, whereas the majority of Fahrenheit users need a thermometer to ascertain the approximate temperature. The human body can only detect a difference of one Celsius degree.

    Maybe in some fine scientific work, either theoretical or in a laboratory, decimal parts of a degree serve a purpose, and I would think for the use of the kelvin unit only. The Celsius scale was “designed” for common use by the average citizen to know the temperature of his environment and never need be expressed to less than whole numbers.
