The problem with defining order of magnitude

The order of magnitude of a number gives an idea of the size of the number. It's a frequently employed concept in mathematics, physics, and other sciences, wherever we have to deal with very large numbers. For the most part, it's used to see how one big number compares with another. For instance, if two large numbers have the same order of magnitude, we can infer that they are roughly the same size.

The average Asian elephant is two orders of magnitude heavier than the average human.
(Image via Pixabay)

In a general sense, the order of magnitude of a number is the exponent of the smallest power of 10 required to represent the number. For example, the order of magnitude of the number 89 is 2, for the simple reason that 100, or $$$10^{2}$$$, is the smallest power of 10 that is at least as large as 89. But, as we will discover below, there's a problem with this line of reasoning.
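If we take this informal rule to mean the exponent of the smallest power of 10 that is at least as large as the number, it amounts to the ceiling of the common logarithm. Here is a minimal Python sketch of that reading; the name naive_oom is just an illustrative label, and this interpretation is an assumption rather than a definition given anywhere above.

```python
import math

def naive_oom(n: float) -> int:
    """Exponent of the smallest power of 10 that is >= n (for n > 0)."""
    return math.ceil(math.log10(n))

print(naive_oom(89))   # 2, since 10**2 = 100 is the smallest power of 10 >= 89
print(naive_oom(100))  # 2, since 100 is itself a power of 10
```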

The order of magnitude appears to lack a universal definition. Many people have defined it in many ways. For instance, Principles of Physics — an acclaimed high school textbook in Nepal — attempts to create the following mathematical formalism:[1]

"To find out the order of magnitude of a number $$$N$$$, we first express it as
$$N = a \times 10^{b},$$
wherein $$$0.5<a\leq 5$$$. Then, $$$b$$$ is the order of magnitude of the number."
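As a rough illustration of how this convention could be computed, here is a Python sketch. The helper name textbook_oom, the error handling, and the decomposition strategy are my own; the book only states the decomposition itself.

```python
import math

def textbook_oom(n: float) -> tuple[float, int]:
    """Decompose n > 0 as a * 10**b with 0.5 < a <= 5 and return (a, b)."""
    if n <= 0:
        raise ValueError("defined here only for positive numbers")
    b = math.floor(math.log10(n))   # usual scientific-notation exponent
    a = n / 10 ** b                 # a now lies in [1, 10)
    if a > 5:                       # shift the mantissa into (0.5, 5]
        a /= 10
        b += 1
    return a, b

print(textbook_oom(89))  # (0.89, 2): order of magnitude 2
print(textbook_oom(40))  # (4.0, 1):  order of magnitude 1
```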

Although this definition seems very reasonable at first glance, it is not without problems. Moreover, it is unlike how the order of magnitude is defined in more prominent places, such as Wikipedia. The Wikipedia page on the subject currently defines the order of magnitude of a number as "an approximate measure of the size of the number, equal to the logarithm (base 10) rounded to a whole number." For example, the order of magnitude of 89 is 2 because the common logarithm of 89 is about 1.95, which rounds to 2.
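The logarithmic definition is short enough to sketch in one line of Python; the name log_oom is mine, and Python's built-in round does the rounding to a whole number.

```python
import math

def log_oom(n: float) -> int:
    """Order of magnitude as the common logarithm of n rounded to a whole number."""
    return round(math.log10(n))

print(log_oom(89))   # log10(89) ≈ 1.95 -> 2
print(log_oom(320))  # log10(320) ≈ 2.51 -> 3
```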

Insofar as we're concerned with numbers that are relatively close to a power of 10, which we can term extreme numbers (numbers such as 10, 20, 80, 90, etc.), these two definitions seem consistent with each other. However, as we move into the medial region, we get numbers such as 40, 50, etc., which we can call central numbers, that behave like odd ducks. This is where the mutual incompatibility of the two definitions becomes evident. Consider, for instance, the central number 40. In light of the first definition, its order of magnitude is 1, because 40 is expressed as $$$4 \times 10^{1}$$$. However, going by the second definition, the order of magnitude of 40 is 2, given that its common logarithm is about 1.60, for which the nearest whole number is 2.
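Putting the two conventions side by side makes the disagreement visible. The following self-contained Python sketch uses compact variants of the helpers above; the closed form math.ceil(math.log10(n / 5)) is my own shorthand for forcing the mantissa into (0.5, 5], not a formula taken from the textbook.

```python
import math

def textbook_oom(n: float) -> int:
    """b such that n = a * 10**b with 0.5 < a <= 5 (n > 0)."""
    return math.ceil(math.log10(n / 5))

def log_oom(n: float) -> int:
    """Common logarithm of n rounded to the nearest whole number."""
    return round(math.log10(n))

for n in (10, 20, 40, 50, 80, 90):
    note = "" if textbook_oom(n) == log_oom(n) else "  <-- definitions disagree"
    print(f"{n:>2}: textbook={textbook_oom(n)}, log={log_oom(n)}{note}")

# The two rules agree on the 'extreme' numbers but split on the
# 'central' numbers 40 and 50.
```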

Intuitively, it seems to us that 40 is closer to 10 than to 100, which appears to warrant an order of magnitude of 1. But, as the log definition shows, 40 is actually closer to 100 than to 10 once distances are measured on a logarithmic scale. So should we follow our intuition or ditch it? I think there are good reasons to ditch it. First of all, the logarithmic definition is more natural and less of a hassle, in the sense that we don't have to set arbitrary dividing lines at 0.5 and 5. In addition, because it involves a common logarithm, the logarithmic definition makes clear that the concept of order of magnitude is tied to the exponent of the number 10.
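The distances on the logarithmic scale make this concrete:

$$\log_{10}40 - \log_{10}10 \approx 1.602 - 1 = 0.602, \qquad \log_{10}100 - \log_{10}40 \approx 2 - 1.602 = 0.398.$$

Equivalently, 40 sits a factor of 4 above 10 but only a factor of 2.5 below 100, so in the multiplicative sense that logarithms measure, it really is nearer to 100.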
