Pardon The Decimal Dust

by The Metric Maven

Early in my time as an undergraduate at a Midwestern university, I discovered that having “too many decimal places” on my homework was a terrible and ignorant trespass on rationality and technical competence. I was told that the decimal values in my calculation, which were only a few places past the decimal marker, were meaningless. My TA, a graduate student, had the “experience” to understand this problem and the importance of properly rounding numbers. What I’ve generally settled on these days is to use three places past the decimal point, so that I can easily change a metric number to an integer expression should I want to change the chosen metric prefix. I also have enough years behind me to realize that whatever number of places my TA chose was not the product of an error analysis; it was the product of personal preference.
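
To make the convention concrete, here is a minimal sketch (my own illustration, not anything my TA prescribed) of why three decimal places pair naturally with SI prefixes, which step by factors of 1000:

```python
# Keeping three places past the decimal point means that shifting to the
# next smaller SI prefix (a factor of 1000) yields a clean integer.
value_m = 1.234                    # a length in meters, three decimal places
value_mm = value_m * 1000          # the same length in millimeters
print(f"{value_m} m = {value_mm:.0f} mm")   # prints: 1.234 m = 1234 mm
```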

I’ve heard critics of “too many decimal places” call values they believe are insignificant “decimal dust.” There is no precise definition of this term. It has been defined as:

WORD SPY

DECIMAL DUST: An inconsequential numerical amount.

What I’ve come to realize is that a sort of meaningless bifurcated “intellectual” tug-of-war occurs between those who are concerned about “the excessive use of decimals” and those who see “too much precision.” Historians have noticed that Newton would compute answers out to an unnecessarily large number of decimal places. Their conclusion is that “he just liked doing the calculation.” This is very probably true. Quite possibly the best engineer I’ve ever shared an office with was Michel. He was able to take the abstract equations of electromagnetism and turn them into useful computer code. We were, of course, banned from presenting any results that interfaced with the Aerospace world in metric, but inside of every computer were, to my knowledge, metric computations hidden away in ones and zeros, so as not to offend delicate Olde English sensibilities.

I was fascinated with understanding the details of how Michel implemented his computer code and verified it. What I noticed immediately was that Michel would hand-compute each line of code and compare the result with the code’s output, line by line. I was surprised that he was carrying the hand calculation out to perhaps ten decimal places! I had always harbored a secret desire to compute out to that many places. When I did, it gave me some strange reassurance that my code was right, despite the admonishment I’d received at my university against creating and propagating meaningless “garbage numbers.”

One day I could clearly see that something was bothering Michel, and I just had to know what it was, as what he worked on was usually very interesting. He had some computer code he had not written, which had been used in-house for some time. He was told to use it to predict the outcome of a measurement. Michel had derived a formula that should have been equivalent to what was implemented in the computer code, but about four or five decimal places out, the values were different. Michel showed me the hand calculation (he had checked it three times) and the computer output. In his French accent he said, “They should be the same, should they not?” I agreed. We checked the values of physical constants, such as the speed of light. They were all the same. Finally Michel saw the problem, and it was in the code: at certain extremes it would introduce a considerable error into the computation. That was the day I began to always check my hand and computer computations to at least five or six places. I would learn that one man’s decimal dust is another man’s gold dust.
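
As a hedged illustration of the kind of line-by-line check described above, consider a classic textbook example (not Michel’s actual code; the names `naive` and `stable` are mine): two algebraically identical expressions that part ways several decimal places out when one of them hits a numerical extreme.

```python
import math

# 1 - cos(x) and 2*sin(x/2)**2 are equal on paper, but the first suffers
# catastrophic cancellation for small x: exactly the sort of discrepancy
# that only shows up when results are compared to many decimal places.
for x in (1.0, 1e-4, 1e-8):
    naive = 1.0 - math.cos(x)            # loses digits as x shrinks
    stable = 2.0 * math.sin(x / 2)**2    # algebraically identical, but stable
    print(f"x = {x:.0e}  naive = {naive:.10e}  stable = {stable:.10e}")
```

At x = 1 the two agree to every printed digit; at x = 10⁻⁸ the “naive” formula collapses to zero while the stable one still returns the correct value.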

Indeed, decimal dust can be a source of new scientific knowledge. In the 1960s, Edward Lorenz (1917–2008) noticed a very interesting output from a non-linear mathematical computer model he was using. He wanted to repeat the computation and, as a short-cut, input the initial conditions by hand. Lorenz rounded the original input value of 0.506127 to 0.506, discarding decimal places expected to be insignificant and plenty accurate. When he ran the model again, after a short period the output was nothing like the previous computation. Changing the input value at the level of “decimal dust” was expected to have no effect on the computation; clearly, for the mostly non-linear world we live in, this is not the case. It was Lorenz who coined the now ubiquitous term “the butterfly effect” for sensitivity to initial conditions, and ushered the science of chaos theory into what it is today. The tiny pressure changes caused by a butterfly flapping its wings in Africa have the potential to be the seed for a hurricane in the Atlantic Ocean. There are cases where non-linear deterministic equations need an infinite number of decimal places for a computation to repeat over all time.
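
A minimal sketch of this sensitivity, using the well-known three-variable Lorenz system rather than the larger weather model Lorenz actually ran (and a crude Euler integrator of my own), shows two trajectories that start a “decimal dust” apart and end up nowhere near each other:

```python
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Advance the Lorenz equations by one (crude) Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (0.506127, 1.0, 1.0)   # full-precision initial condition
b = (0.506,    1.0, 1.0)   # the rounded "decimal dust" version

for step in range(40001):
    if step % 10000 == 0:
        print(f"t = {step * 0.001:4.0f}  x_a = {a[0]:10.5f}  x_b = {b[0]:10.5f}")
    a = lorenz_step(*a)
    b = lorenz_step(*b)
```

By the end of the run the two outputs bear no resemblance to one another, exactly as Lorenz observed.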

In the early 1980s, British scientists using ground-based measurements reported that a large ozone hole had appeared above Antarctica. This was quite surprising, as satellite data had not noted the same problem. The computer code for the satellite had “data quality control algorithms” that rejected such values as “unreasonable.” Assumptions about which values are important, and which are not, are assumptions, and should be understood as such. Another example is “filtered viruses.” It was assumed viruses had to be smaller than a certain dimension, so any microbes above that size were removed by filtering. It took decades for researchers to realize that monster-sized viruses exist. I’ve written about this in my essay Don’t Assume What You Don’t Know.
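
As a purely illustrative sketch (the numbers and the cutoff are invented; this is not the actual satellite algorithm), here is how an a priori “reasonable range” check can keep a real anomaly from ever reaching an analyst:

```python
# Hypothetical total-ozone readings in Dobson units; the low values are
# the real signal, but the filter's a priori floor assumes they cannot be.
readings = [310, 305, 298, 180, 175, 302]
MIN_REASONABLE = 250   # an assumed "plausible value" cutoff

accepted = [r for r in readings if r >= MIN_REASONABLE]
rejected = [r for r in readings if r < MIN_REASONABLE]
print("accepted:", accepted)                  # all the analyst ever sees
print("rejected as unreasonable:", rejected)  # the discovery, filtered away
```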

The a priori assumption of what is important is used as a rhetorical cudgel to suppress “excessive” information. When I’ve argued that human height should be represented in meters or millimeters (preferably millimeters), there is a vast outcry that only the traditional atavistic pseudo-inch known as the centimeter should be used. To use millimeters is, harrumph!, “too much precision.” It is also a possible lost opportunity for researchers, as information has been suppressed by the introduction of a capricious assumption. One can always round the offending values down later, but obtaining better precision after the fact is not an option. In my view, those who use the term decimal dust in a manner other than as a metaphor for tiny are lazy in their criticisms, and assume they know how many decimal places matter without any familiarity with the subject and the values involved.

When long and thoughtful effort is expended, one can introduce the astonishing simplification of using integers, which eschew decimal points entirely. As has been pointed out ad nauseam in this blog, using millimeters for housing construction partitions the measurement environment in a way that allows for this incredible simplification. Pat Naughtin noted that integer values in grams should be used for measuring babies. This produces an intuitive understanding of the amount of weight that a baby gains or loses, compared with kilograms. Grams, millimeters, and milliliters are efficient for everyday use. Integer values are the most instinctive numbers to compare in tables. The metric system is beautiful in its ability to provide the most intuitive ways of expressing the values of nature. It is up to us to use it wisely and thoughtfully, instead of dogmatically. In my view, this measurement introspection is sorely lacking in our modern society, and definitely in the community of science writers and researchers.

If you liked this essay and wish to support the work of The Metric Maven, please visit his Patreon Page.

Close Enough for American Work

By The Metric Maven

Metric Day Edition

Recently my Significant Other (SO) requested that I take her to a rock and mineral shop. She had driven past it many times but never stopped. There were all manner of minerals and small fossils. The displays were all very interesting, and it seemed I was destined to look but not to make a purchase. Then I noticed the tools section. There was a common inch ruler with a centimeter scale on its opposite edge. Yet another sad testament to metric “practice” in the US. Nothing unexpected there. To my surprise, there was also a small dual-scale caliper with þe olde English and metric, but it had fractional inches on one side and millimeters on the other. For $7.50 I could not resist purchasing it. Here is a photo:

[Photo: Inexpensive Calipers]

My SO, who had requested we visit, did not make a purchase, but I did. When I arrived at the cash register, an older man with a beard was waiting. I commented on the millimeter scale on the calipers and that it did not have centimeters. The man did not miss a beat and said, “We have to have them, all the precious stones are measured in millimeters.”

I looked at the calipers with more care and noticed a lower scale.

Then I said: “Oh my goodness, look at this, it has a vernier scale on the metric side.”

The cashier had no idea what a vernier scale was. I did my best to explain it from memory. I also told him that, in my opinion, the creation of the vernier scale was a very important development in the history of engineering and science. I recalled that I had learned to use a vernier scale on a micrometer during my time working as an offset pressman. It was in Daniel J. Boorstin’s book The Discoverers (pg 400) that I first ran into a historical discussion of the vernier scale. The scale was created by the French mathematician Pierre Vernier (1580–1637) and bears his name.

A U.S. quarter-dollar coin has a diameter of 24.26 millimeters. I placed a current 25-cent coin into the calipers to measure it. The scale is shown below:

[Photo: Vernier measurement of a U.S. quarter dollar]

The first line of the vernier scale indicates the measured length is between 24 and 25 millimeters. We next look for the vernier line that best matches up with a line on the upper scale. The 3 is probably the closest to perfect alignment, so we can estimate that the diameter is about 24.3 mm. This deviates by only 0.04 mm (40 µm) from the design value. That is a very close estimate for such a roughly fabricated device.
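
For the curious, the arithmetic of reading a ten-division vernier is simple enough to put in a few lines. This is a sketch of the general principle, with a hypothetical helper of my own naming, assuming each vernier line is worth 0.1 mm:

```python
def vernier_reading_mm(main_mm, aligned_line):
    """Whole millimeters from the main scale, plus 0.1 mm for each
    vernier division up to the line that best aligns."""
    return main_mm + aligned_line * 0.1

# The quarter: main scale between 24 and 25 mm, vernier line 3 aligned.
print(vernier_reading_mm(24, 3))   # 24.3 (mm), vs. the 24.26 mm design value
```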

Let’s look at the upper inch scale. Oh, my, there is no vernier scale. It would appear that because it is graduated in fractions, a vernier scale was not added. The smallest fractional division appears to be sixteenths of an inch, or 0.0625 inch. When converted to a useful metric unit, 1/16″ is 1.588 mm. It also appears that it would be rather easy to mistake the smallest division for 1/8″ rather than 1/16″, and this would complicate the inch measurement considerably. The mark halfway between zero and 1/2 has about the same downward length as the 1/8 marks. This is not the “standard” way that U.S. rulers are marked. Below is an example with the first inch divided down to 1/32″ and the second inch down to 1/16″.

[Image: Ruler with fractional inch divisions]

If you want to know why the divisions are different for the first inch, I invite you to read my blog The Design of Everyday Rulers. This odd set of graduations caused me to wonder if the calipers had been manufactured in a metric country by a person who is not familiar with our complex þe olde English practices. That the word meter is employed, with an er rather than an re, makes one suspicious that an American was behind this muddled design. Clearly the calipers make measurements in millimeters and not meters. Why use the word meters rather than millimeters or metric?

Assuming we have figured out the inch fractions on the caliper, we see about 15/16″ and “a little more.” How much more is this? Well, because we do not have a vernier scale, we have to estimate the value by eye. It looks like it may be about 1/10 of a 1/16″ space, if I have to guess, which I do. So what is 1/10 of 1/16″? To divide fractions we invert and multiply, as I was told as a youth. We end up with 10/16; that can’t be right. Oh wait, we need to divide 1/16 into 10 parts, or 1/16 divided by 10/1. When we invert and multiply now, we get 1/160. Next we need a common denominator to add the fractions and obtain a final value. We multiply the top and bottom of 15/16″ by 10 and have 150/160. We can now add 150/160 + 1/160 to get 151/160 inches. Finally we can make this fraction a decimal and get 0.94375 inches, or, when converted to millimeters, 23.97 mm, which we can compare directly with the vernier value of 24.3 mm. Even after all of that work and estimating, the vernier scale with millimeters is more accurate, and DEFINITELY simpler to read.
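
If you would rather let a computer suffer through the fractions, here is the same calculation as a sketch using Python’s fractions module, reproducing the arithmetic above:

```python
from fractions import Fraction

# 15/16 of an inch, plus an eyeballed 1/10 of a 1/16-inch space.
reading = Fraction(15, 16) + Fraction(1, 16) / 10
print(reading)                          # 151/160
inches = float(reading)                 # 0.94375
print(f"{inches} in = {inches * 25.4:.2f} mm")   # 0.94375 in = 23.97 mm
```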

Today we have mechanical dial calipers, and also calipers with electronic readouts, but a vernier scale with millimeters is still an accurate and simple way to measure length. This example also illustrates the inaccurate and complicated way we in the US measure with fractional inches. We have not even bothered to decimalize the inch on our common everyday rulers. I have a proposal: let’s just switch over to the metric system directly, and skip a kludge like decimal inches in favor of the streamlined system that uses millimeters.

If you liked this essay and wish to support the work of The Metric Maven, please visit his Patreon Page.

***

The Metric Maven has published a new book titled The Dimensions of The Cosmos. It examines the basic quantities of the world from yocto to Yotta with a mixture of scientific anecdotes, and it may be purchased here.
