For example, 0**0 is generally defined to be 1 even though 0**X is 0 for all X > 0. Also, 0! (factorial) is defined to be 1. That's because these conventions "allow us to extend definitions in different areas of mathematics that would otherwise require treating 0 as a special case."
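As a quick sanity check, Python happens to follow both of the conventions mentioned above (this is just an illustration of the conventions, not a proof of them):

```python
import math

# Convention 1: 0**0 is defined as 1
print(0 ** 0)             # 1

# Convention 2: 0! is defined as 1
print(math.factorial(0))  # 1

# ...even though 0**X is 0 for every X > 0
print(0 ** 1, 0 ** 2)     # 0 0
```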
Seems to me that this is exactly why we don't allow 0/0 = 1, or in fact n/0 for any n: it doesn't work without making a lot of special cases. If 0/0 = 1, then the proof I gave before would prove that 2 = 1.
Here's a definition that we'd have to apply a special case to if 0/0 = 1 were allowed:
0 * x = 0 (except when x = n/0)
If we simply say that n/0 is not a number, all these special cases go away.
I think the examples of continuous functions that seem to work when the denominator goes to 0 conflate division by 0 with taking the limit as the denominator goes to 0. I'm not sure, as I'm no expert in math, but I believe these are all examples of functions whose values are approximations represented by infinite series. In such cases, you would have to examine the infinite series to understand what's really going on.
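Here's a small sketch of that distinction (my own illustrative functions, not from any textbook): three quotients whose numerator and denominator both go to 0 at x = 0, yet whose limits are 1, 2, and 0 respectively. That's the sense in which "0/0" can't be given one fixed value, even though each limit is perfectly well defined.

```python
# Three functions where plugging in x = 0 would give "0/0",
# but the limits as x -> 0 all differ:
def f(x): return x / x          # limit is 1
def g(x): return (2 * x) / x    # limit is 2
def h(x): return x**2 / x       # limit is 0

for x in (0.1, 0.01, 0.001):
    print(f(x), g(x), h(x))

# Actually evaluating at x = 0 fails, as it should:
try:
    f(0)
except ZeroDivisionError as e:
    print("f(0) is undefined:", e)
```

The limit machinery looks at the behavior of the function *near* 0 without ever dividing by 0 itself, which is why it can succeed where the literal division cannot.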