
Rooting is not the inverse of exponentiating!

I read someone spreading this lie again the other day, and it annoyed me. Consider addition, whose inverse operation is subtraction. The additive inverse is typically expressed as a + (-a) = 0, which we shorten to a - a = 0. All subtraction is, is inverted addition.

Consider multiplication, which can be thought of (but isn't necessarily) as iterated addition. The multiplicative inverse is expressed as a * 1/a = 1. In other words, division is multiplication's inverse. (Why is multiplication not necessarily iterated addition? Well, if the numbers are discrete, you're okay. But explain how you can take 2 * pi and add two to itself pi times. 2 * 1.5 is adding two to itself one and a half times (which already sounds strange); in other words, take one two, then add half of a two. The iterated view isn't necessarily wrong, it's just not helpful after a point.)

Now consider exponentiation, which is often thought of (but isn't necessarily) as iterated multiplication. 3^2 is 3 * 3, which is 9; 3^3 is 3 * 3 * 3, which is 27. What's the inverse, you ask? Well, for 3^2, take the square root! For 3^3, take the cube root! For x^2 = y, take the square root of (known) y to find (unknown) x! (So long as x and y aren't negative, 'cause we're afraid of complex numbers.) This is all true so far. For 3^x = y, where x is known, the inverse operation that recovers the base from y is indeed the "xth root", since x is an actual known number. How about when y is known and x isn't? For 3^x = 27, take the... literal xth root... Wait...

Uh... There's the problem. What does it mean to take the xth root of something when you don't know x? Let's look at what we're actually doing with square rooting and cube rooting; unfortunately, it really hurts the view of exponentiation as iterated multiplication.

[math]
3 = \sqrt{9} = \sqrt{3^2} = (3^2)^{\frac{1}{2}} = 3^1
[/math]

It's a known identity that (a^b)^c = a^(b*c); here the exponents multiply to give 2 * 1/2 = 1.

What's x^(1/2)? Why, it's the square root of x. x^(1/3) is the cube root of x. This is why in programming you don't really need a square root function or a cube root function, just a pow function that can take floats.
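To make that concrete, here's a minimal Python sketch (math.sqrt is the standard library function; ** is Python's pow operator):

import math

x = 9.0

# Square root as a fractional exponent: x ** (1/2) is the same as sqrt(x)
print(x ** 0.5)          # 3.0
print(math.sqrt(x))      # 3.0, same thing

# Cube root as a fractional exponent
print(27.0 ** (1 / 3))   # ~3.0, up to floating-point rounding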

Try to phrase it using iterated multiplication. 9^(1/2) means... 9 multiplied with itself one half times? Huh? Half of 9 is 4.5, how can I multiply 9...

You see, the iterated view fails here. (If you can make it work, let me know in the comments.) Also remember that 9^0 = 1.

The inverse of addition is subtraction, and the inverse of multiplication is division, but subtracting is just a form of addition and dividing is just a form of multiplication. Taking a root is just a form of exponentiation (in fractional form), so why shouldn't it be the inverse?

If I give you the problem 2 * x = 5, you can solve for x. You use the inverse of multiplication and multiply both sides by 1/2 (or divide by 2), and we see x = 5/2. I also give you the problem x^2 = 9, and what do you do? You take the square root of both sides: you raise each side to the power 1/2. (x^2)^(1/2) = x (assume x is known to be positive), and 9^(1/2) = 3. So x = 3. (How do you know 3 is the square root of 9? Because you already know 3^2 = 9, thus (3^2)^(1/2) = 3^1, or your calculator told you. How does your calculator tell you? Magic.) You think all's dandy, that you have an inverse for exponentiation. But the inverse operation you used was not on exponentiation itself, it was on squaring. So now I give you the problem I mentioned above: 3^x = 27.

You know the answer is 3, but how do you get there? You can try blindly plugging in values, but that's not very helpful. Your calculator doesn't have an "xth root" button on it, and you don't see how (assuming we somehow know x to be non-zero) 3 = 27^(1/x) helps you find the value of x. You're stuck. I thought you knew the inverse of exponentiation? No, you don't.
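(As an aside, "blindly plugging in values" can at least be made systematic. Here's a minimal, hypothetical bisection sketch in Python; note that it's a numeric search, not an algebraic inverse, which is exactly the point:)

def solve_exponent(base, target, lo=0.0, hi=10.0, tol=1e-12):
    """Numerically find x such that base**x == target, by bisection.

    Assumes base > 1 and that the answer lies in [lo, hi]: this is
    guess-and-check made systematic, not an algebraic inverse.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if base ** mid < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_exponent(3, 27))  # ~3.0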

[math]
\log_b(a) = c \quad \text{if} \quad b^c = a
[/math]

The logarithm is the inverse of exponentiation. Read the above as: "the log with base b that produces value a is c, if b^c = a."

[math]
\log_3(9) = 2
[/math]

The log with base 3 that produces 9 is 2. In other words, if you have base 3 and raise it to the second power, you get 9. So go back to the problem 3^x = 27. Take the log (base 3) of both sides. The left side magically becomes x (I'm not going to go into the depths of logarithms), and the right side becomes log base 3 of 27. And if you noticed, your calculator has a log button! If you plug it in correctly*, you will get the answer of 3. The log base 3 of 27 is, in actuality, asking: what power, when applied to 3, will produce 27? We've just changed the question to: what is the log base 3 of 27? How to answer that question (which is what was asked before, just phrased differently) is left as an exercise; for now, use your magic calculator.
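Spelled out in symbols, the whole move is:

[math]
3^x = 27 \implies \log_3(3^x) = \log_3(27) \implies x = \log_3(27) = 3
[/math]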

Logs are awesome, and are insanely useful. Now can we please stop spreading the myth that rooting is the inverse of exponentiation? That's like saying subtracting three is the inverse of addition.

*To input this into your calculator directly, you'll probably have to do log(27)/log(3). This is because the calculator's log() function is likely base 10, not base 3, but log(x)/log(y), where both log() calls use the same base, is the same thing as the log of x with base y.
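The same change-of-base trick in Python, as a quick sketch (math.log takes an optional base argument, so both forms are available):

import math

# Change of base: log base 3 of 27 via natural logs
print(math.log(27) / math.log(3))  # ~3.0

# Or directly, since math.log accepts an optional base
print(math.log(27, 3))             # ~3.0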


Posted on 2010-01-13 by Jach

Tags: math

Permalink: https://www.thejach.com/view/id/66

Trackback URL: https://www.thejach.com/view/2010/1/rooting_is_not_the_inverse_of_exponentiating


Anonymous February 23, 2013 03:21:16 PM It seems to me that exponentiation, by virtue of not being commutative, has two possible inverse approaches: undoing the exponent (roots) OR undoing the base (logarithms), depending on which argument you need to solve for.

When solving for a base, we need to undo the second argument, which we can do using roots (or roots expressed as a fractional exponent)... which is very similar to how we undo addition or multiplication.

When solving for the exponent, we need to undo the first argument, which we can do using logarithms.

Things are not symmetric here, unlike with addition and multiplication, because exponentiation is not commutative.

I am still wrestling with how to best explain exponentiation to high school students in a conceptual manner... it's complicated.

Whit
http://mathmaine.wordpress.com
Jach February 23, 2013 09:12:11 PM Thanks for the comment. I agree with not being too confident about how best to explain the concepts fresh... Though I still think that exp and log as inverse operations is a better concept than exp and exp-to-a-fractional-power, or sometimes one and sometimes the other.

Another reason I like this more is that when solving a root problem like $$x^n = 27$$, there are $$n$$ answers. In the complex plane you get all of them; on the real line you might miss some. Solving the other problem, $$n^x = 27$$, however, results in a single answer on the real line, which fits the idea of an inverse much better to me. In the complex plane it's a $$2\pi k$$-periodic answer that maps to a vertical line through the point on the real axis. Both underscore the relationship between exponents and the complex plane, which is so important in engineering applications and ought to be shown much earlier than college...
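To make the "$$n$$ answers" point concrete, here's a small Python sketch (using the standard cmath module) that lists all three complex cube roots of 27:

import cmath

# The three complex solutions of x**3 == 27: the real cube root of 27
# times the three cube roots of unity.
roots = [3 * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
for r in roots:
    print(r, r ** 3)  # each r**3 comes back as ~27 (up to rounding)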

One teaching approach I wonder about, whether it might be better or worse, is getting rid of the notions of "the square root", "the cube root", etc. as operations altogether, and expressing them only as exponentiating to fractional powers that clearly undo the non-fractional power via multiplication. Algebraic problems then become a question of "do I need to do addition/subtraction to both sides, or multiplication/division to both sides, or sine/arcsine to both sides, or exponentiating/logging to both sides?"

Is the fact that exponentiation isn't commutative such a big problem, compared with other non-commutative things? Division, subtraction, and matrix multiplication are all non-commutative, but it seems like students are okay with finding inverses for these things while watching out for the special cases.
Anonymous February 24, 2013 05:57:17 AM Well, we basically train students to always rewrite subtraction and division as addition and multiplication respectively, so that they can regain the commutative and associative properties when manipulating an expression algebraically, so yes I see non-commutativity as a big deal.

I guess I need to go back and re-read my complex numbers text from college... Subtraction forced us to move from the counting numbers to the set of integers. Division forced us to expand our universe from the integers to the rational numbers, and roots (more so than logs in high school) force us to expand our numerical universe to include imaginary numbers. So at each step, it is the inverse function that has introduced us to new types of numbers, which have turned out to be very useful the more we worked with them...

Perhaps complex numbers are at the heart of exponentiation, and we should introduce them before attempting to get at inverse concepts with exponentiation... However I suspect that lack of commutativity will still "break the pattern" of simplicity that we became used to with the arithmetic operations.
AnonyGauss September 24, 2013 03:24:28 AM Interesting article, very clear explanation of the concept!

I really had to think for some time about Anonymous' first comment, which stated:
"It seems to me that exponentiation [...] has two possible inverse approaches: undoing the exponent (roots) OR undoing the base (logarithms) [...]."

Would you say this is true? Or is there more to it? It seems to me like a reasonable explanation, but until today I always thought that both the root and the logarithm were the inverse of exponentiation. I hope you can explain this, or know a link that does!
Jach September 24, 2013 01:46:44 PM AnonyGauss, Whit (Anonymous 1 and 2) has some articles on his blog you might like to read. To specifically answer your question though, I'm not quite convinced that just by being non-commutative then exponentiation should have two very different inverse functions. Matrix multiplication is also non-commutative: $$AB = C \neq D = BA$$; but there's only one inverse operation, and that's finding the inverse of a matrix if it exists and then multiplying.
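A quick sketch of that point (using numpy purely as an illustration): matrix multiplication is non-commutative, yet a single inverse operation suffices, so long as you multiply on the matching side:

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

C = A @ B                      # in general A @ B != B @ A
print(np.allclose(C, B @ A))   # False: not commutative

# One inverse operation recovers B, applied on the matching side:
print(np.linalg.inv(A) @ C)    # recovers B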

Another example might be string concatenation: 'T' + 'HAT' = 'THAT', but that's not the same as 'HAT' + 'T' = 'HATT'. Since in general $$String1 + String2 = String12 \neq String21 = String2 + String1$$, does that imply we need two inverse functions? I don't think so, I think there's just one inverse function: "remove letter(s) from either end if possible." So 'THAT' - 'T' is either 'HAT' (removing from the front, which is right if we did 'T' + 'HAT') or 'THA' (removing from the end, which is right if we did 'THA' + 'T'.) However 'HATT' - 'T' can only be 'HAT'. 'HATT' - 'HAT' can only be 'T'. And 'HATT' - 'A' is undefined. Sure there are multiple values the inverse function can produce sometimes depending on the arguments, but this isn't really that different from $$3/0$$ producing plus and minus infinity and $$0/0$$ being indeterminate...
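Here's a rough Python sketch of that "remove letter(s) from either end" inverse (unconcat is a hypothetical name, just to pin the idea down):

def unconcat(whole, part):
    """Invert concatenation: strip part from either end of whole.

    Returns the set of possible remainders; an empty set means undefined.
    """
    results = set()
    if part and whole.startswith(part):
        results.add(whole[len(part):])   # undoes part + remainder
    if part and whole.endswith(part):
        results.add(whole[:-len(part)])  # undoes remainder + part
    return results

print(unconcat('THAT', 'T'))    # {'HAT', 'THA'}
print(unconcat('HATT', 'T'))    # {'HAT'}
print(unconcat('HATT', 'HAT'))  # {'T'}
print(unconcat('HATT', 'A'))    # set() -- undefined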

So I'm still hesitant to accept rooting as an inverse function... After thinking about it some more, part of what bothers me is that it's not a necessary "unique" operation in light of the reduction of rooting to just being more exponentiation. If we use the idea that inverting addition brought us negative numbers, etc., then I think it's clear that the operator comes first, the new number class comes second, at least until you get to the rationals.

Suppose we are inventing math and start with the positive integers, and after just inventing the addition function to add two positive integers we ask "given $$c = a+b$$, if we know just $$a$$ how can we invert $$c$$ to recover $$b$$?" We need to invent a new operator, $$-$$, and define it as the reverse process of $$+$$. Instead of counting up, we count down. We come to a problem though when we try to count down below 0, so we invent a new class of numbers, namely the negative integers. We discover that with this new class of numbers, we can just use addition for everything if we want, so long as we redefine addition to handle this new class of numbers. (But that redefinition will still contain the mechanical parts of the subtraction function.)

Now we have positive and negative integers (and of course 0 which is typically lumped in with the positives...), plus and minus functions, and we invent a new operator $$*$$ (that's multiplication, not convolution), which for now we define in terms of iterated addition over integers with special rules for negative signs... Given $$c = a*b$$, how do we recover $$b$$ if we only know $$c$$ and $$a$$? Again we have to invent a new operator, $$\div$$. We define it in terms of repeated subtraction, dishing out number portions of equal size. We come up against new problems (like what it means to divide by 0, or trying to divide 5 wholes into 2 equal chunks) so we invent a new class of numbers, namely the rationals (fractions, decimals). We also see that with rational numbers we can do everything in terms of multiplication if we want, so long as we redefine multiplication to work with rational multiplicands... (And that definition would contain the mechanical pieces of division.)

Note that right now, we have enough mathematical sophistication to define pi as any circle's circumference divided by that circle's diameter. This may be a hint to the existence of irrational numbers. If you come up with the idea of infinite series, you could even define pi that way. If history had happened differently... But I think it took until the 18th century to actually prove pi irrational. So let's continue on with inventing new operators.

Next we invent exponentiation, first putting it in terms of iterated multiplication with an integer exponent, not remembering the redefinition trouble that caused last time... And we find we may have need for two inverses this time, because exponentiation isn't commutative. So for $$c = x^y$$ we need an inverse to recover $$x$$ given $$c$$ and $$y$$, and an inverse to recover $$y$$ given $$c$$ and $$x$$. $$c = x^3$$ vs. $$c = 3^y$$.

Recovering $$x$$ from $$c$$ and $$y$$ is easy if we allow any $$y$$ to be a rational number instead of just an integer, so inverting $$c = x^3$$ is just as easy as $$c^\frac{1}{3} = (x^3)^\frac{1}{3} = x$$. If you allow for negative numbers the rules are a bit trickier and you can sometimes wind up with two potential solutions, as in $$c = x^2$$ where since we only know $$c$$ and $$2$$, the original sign of $$x$$ has been lost and we are really left with $$+x$$ or $$-x$$ as solutions to $$c^\frac{1}{2}$$...

(Just after this, we also, this time following history more closely, discover the right-triangle identity $$a^2 + b^2 = c^2$$ (the Pythagorean theorem, classically applied to the isosceles right triangle with unit legs), and we discover that $$2^\frac{1}{2}$$ cannot be a rational number... so we inadvertently were forced to create irrational numbers to invert $$c = x^y$$ knowing $$y$$. We might also stumble upon imaginary numbers when we try to compute $$(-2)^\frac{1}{2}$$, and create the complex number class to unite imaginary, irrational, and rational numbers.)

We're still stuck on $$c = x^y$$ knowing $$x$$, though. Even with the new number classes. For this, we really do need a new operator, and that's the logarithm. If we have $$c = 3^y$$, we can find $$y$$ by applying the logarithm with base 3 to $$c$$: $$\log_3(c) = \log_3(3^y) = y\log_3(3) = y$$.

(Since all this really hampers the "iterated multiplication" viewpoint, we might redefine the exponent operator $$a^b$$ on any complex $$a$$ and $$b$$ in terms of the general exponent $$e^z$$. And just like what happened when we redefined addition to take negative numbers into account, with the mechanics of subtraction hiding in the redefinition, this redefinition of exponentiation contains the mechanics of the natural log inside.)
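For reference, those standard definitions (using the principal branch of the complex log) are:

[math]
a^b = e^{b \ln a}, \qquad e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}
[/math]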

What my argument all boils down to is simply that to undo exponents, you don't need a new root inverse function, just old rational exponents. But to undo bases, you do need a new logarithm inverse function.

Sometimes I feel like the log function is more of a transformation than an inverse. It was originally invented because it turns multiplication into addition: if you're an employee filling in number tables for an almanac, which requires a lot of by-hand multiplications, you can work faster and more accurately with a pre-computed table of logs, using addition instead. The Laplace Transform is an actual transform, and it turns calculus into algebra. The Fourier Transform turns convolution into multiplication. In statistics they actually call it a log transform when they take the log of all the data to better see relative differences.
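That table trick rests on the basic identity, which turns a product into a lookup, an addition, and a reverse lookup:

[math]
\log(ab) = \log(a) + \log(b)
[/math]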

I do see exponentiation, in its general form, as a huge departure from the familiar addition and multiplication, and not just because of not being commutative. Treating 'rooting' as an operation seems just too much like trying to stuff it back into a familiar setting.
Whit September 24, 2013 02:11:41 PM I certainly agree that "roots" are not a new function to undo exponents. Exponents can be "undone" with a reciprocal exponent just as multiplication can be "undone" by a reciprocal factor and addition can be "undone" by the addition of a negative. Using this argument, none of the operations of subtraction, division, and roots are needed... strictly speaking.

However subtraction, division, and roots can all offer a more convenient way of describing a situation linguistically: "take away three" seems more intuitive than "add a negative three".

I also, after reading Jach's examples above, agree that an operator's lack of a commutative property does not force a new function to be introduced.

Logarithms are a bird of a different feather, as Jach describes so well above, probably because exponentiation is so much more than repeated multiplication. We have physical representations for addition and multiplication, but our descriptions of exponentiation are all conceptual as far as I have seen.

I do not know enough about Laplace and Fourier transformations to wrap them into my thoughts on the topic... perhaps I should do some more reading, as the transformation idea is an intriguing one.

Whit
http://mathmaine.wordpress.com
Jach November 18, 2013 10:07:27 PM Whit: Vi Hart (who uploads a variety of interesting math-related videos) recently uploaded this one:

[embedded video]

At around the five minute mark she describes how to think of exponentiation as "counting in a times n sort of way", then shows how roots specify a counting rule, then more interestingly she describes logarithms, e.g. $$\log_b(Y)$$, as "how many steps to get to Y in a system that counts in a times b sort of way." It's a fascinating geometrical view using the number line that I haven't thought about before, though apparently Wolfram Alpha actually presents the number line view as well when you type in log(Y)/log(b).

I'm not sure what she's trying to say at the end, that the logarithm is "a beast which at times transcends even the fanciest of counting". I guess with complex numbers I don't really see what it means to find the number of steps it takes to get to, say, $$\pi+6.687i$$ in a system that counts in a times e sort of way... $$\log_e(\pi+6.687i)$$ is easy to calculate from the rule that $$a + bi = (a^2+b^2)^{1/2} e^{i\arctan(b/a)}$$ and the rule that logs of products are sums of logs, and you can use this number line visualization when computing the real portion of the log, but I wonder if there's a good geometrical view that involves the 2D vector arrows that complex numbers represent...
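For what it's worth, Python's standard cmath module computes exactly this kind of complex log; a quick sketch (the 6.687 is just the value from the comment above):

import cmath

z = complex(cmath.pi, 6.687)  # pi + 6.687i
w = cmath.log(z)              # ln|z| + i*phase(z), principal branch
print(w)                      # approximately (2.0 + 1.13j)
print(cmath.exp(w))           # round trip, back to ~(3.1416 + 6.687j)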
