Lol at the proofs that are simply the words "trivial", "easy", etc. Why is it so hard to just write it out fully in a completed paper -- it would barely add any length. I agree that I may find them easy, but the attitude is harmful. Besides, after doing any sort of functional programming the whole paper is obvious, so why even bother :). The same brain rot extends to more complicated situations.
I don't know about this particular paper, but there's a) usually a page limit when publishing and b) definitely a limit to a reader's patience. So filling a paper with trivial or technical proofs is not good form.
That being said, many scientific papers would benefit from being rewritten in a style that focuses on the reader and readability. The current state is understandable: when writing for a journal or conference, there's a lot of time pressure and your audience is bespoke experts on the matter at hand. But there's only a slight chance your paper becomes important to a broader audience. I wonder if one could form a business of rewriting classic papers of CS and math.
For b) most people do not read a paper linearly, you look at maybe the intro, skim around a bit, and then maybe go to an appropriate section. If you are really invested then maybe you go through the whole thing. A graph would be a much better format. A bespoke expert with time pressure is not going to comb over the whole thing, they will pick out the bits they care about.
For the intended audience of a paper, many things are considered trivial or easy. The attitude may be harmful when teaching (if you’re unaware of what the students find trivial), but in professional communication it is a signal to the reader that they shouldn’t spend unnecessary time focusing on some (trivial or easy-to-verify) part of the argument.
It's unfortunate there is no equality decision procedure. If two numbers are equal you will see that the difference is zero no matter how much precision you request, but if you always see a zero difference you will never know whether the numbers are equal or you just have to keep searching.
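A sketch in Python of why this is only a semi-decision procedure (the function name, precision schedule, and slack are my own choices, not anything from the article): inequality can be confirmed at some finite precision, but equality can never be confirmed, only bounded by a cutoff.

```python
from decimal import Decimal, getcontext

def try_to_distinguish(f, g, max_prec=100):
    """Semi-decision sketch: return True once f() and g() are seen to
    differ at some precision; give up (None) at max_prec. Without the
    cutoff, equal inputs would keep us searching forever."""
    for prec in range(10, max_prec + 1, 10):
        getcontext().prec = prec
        # allow a few dozen ulps of rounding slack at this precision
        eps = Decimal(10) ** (2 - prec)
        if abs(f() - g()) > eps:
            return True
    return None

# sqrt(2)*sqrt(3) == sqrt(6): every precision shows ~zero difference,
# but that never *proves* equality -- we just give up
same = try_to_distinguish(lambda: Decimal(2).sqrt() * Decimal(3).sqrt(),
                          lambda: Decimal(6).sqrt())
# sqrt(2) vs. a truncation of it: the difference shows up quickly
diff = try_to_distinguish(lambda: Decimal(2).sqrt(),
                          lambda: Decimal("1.41421356"))
```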
I know that such a decision procedure exists for algebraic numbers (i.e., you can allow roots of any order and general polynomial root extraction), but I don't know about transcendental functions like the exponential and logarithm. And frankly it smells a lot like undecidability. Computational things tend to become undecidable just a little bit after they become interesting, and sometimes even before.
There is an algorithm by Richardson to prove equality of real and complex numbers composed from exponentials and logarithms. (It doesn't have a complete proof of correctness, but it never returns a wrong result: it will loop forever iff it encounters a counterexample to Schanuel's conjecture.)
It can also be extended to higher transcendental functions, but it gets harder to guarantee termination.
I have a library for exact reals/complexes with an incomplete implementation of Richardson's algorithm: https://fredrikj.net/calcium/
I would just like to note that this is a moot point, as the computable reals are a subset of the reals, thus by extension equality of the reals is also undecidable.
This fact is always given as an argument against computable analysis, while completely ignoring that the reals suffer from the exact same issue.
It bothers me greatly that the mathematical objects we generally refer to as "numbers" have no constructive basis. I really wish the reals were phased out as the default "number" object as a mathematical curiosity (like the surreals are), and that the default interpretation of "number" is replaced by a computational foundation. The idea of an "non-computable number" seems completely silly to me, as the fundamental property for something to be called a number, to me, is the ability to do arithmetic with it. If I can't add with it, multiply by it, or even know its digits, why are we calling it a number (e.g. https://en.wikipedia.org/wiki/Chaitin%27s_constant)?
Note that I have no problems with undecidability in general, I am well familiar with it and the above does not stem from ignorance. I am not rejecting the reals as a mathematical concept entirely, not at all. They are perfectly valid mathematical objects to study. I simply reject the current definition we have for a "number" as the correct mathematical object for the job.
But you can do arithmetic with uncomputable numbers. The probability of the universal machine not halting on a random input is 1 - Ω.
There, I just subtracted an uncomputable number.
As you note, you cannot "know its digits", which is just rephrasing the "uncomputable" aspect of these numbers.
I disagree that you have done arithmetic here. You have constructed an expression that describes arithmetic.
Would you be satisfied if you entered 3 + 8 into a calculator and it simply gave you back 3 + 8? Would you say that calculator had performed arithmetic? No. You described to it the arithmetic it must perform, and it failed to do so.
In mathematics we almost exclusively work with the descriptions (expressions), using the rules for simplifying and manipulating those descriptions (algebra). But the foundation underlying the algebra is (currently) the reals, which contains uncomputable objects.
Ultimately we want every description of a number to map to an actual number. Why are we allowing descriptions that are not computable, and then call the result a number anyway?
How about instead of number you say "computable number" and call it a day? There, solved it for you.
Oh no, I didn't, because the concept of computability is also not absolutely clear. Helpful reading: "The Nature of Mathematics", in "The Fabric of Reality" by David Deutsch.
One issue with computable numbers is that computable analysis is a far more complex subject than real or complex analysis, since limits and upper bounds do not preserve computability. Similarly, the continuity of computable functions means that one needs a more complex framework to even speak about discontinuity.
He dislikes the idea of presenting infinity as a complete thing. And that one objection leads to a lot of different new directions to pursue. His rational trigonometry does trigonometry entirely without transcendental functions. It can involve square roots at times, but it is kept to a minimum. Everything else is entirely rational based. He has several videos that delve into the problems of each of the standard formulations of real numbers. He also argues for a more practical and computational version of the Fundamental Theorem of Algebra.
An interesting demonstration of the difficulty of real number arithmetic, relevant to some other comments here, is multiplying 1/9 by itself. For fractions it is trivial: the answer is 1/81, which can of course be converted into a repeating decimal. But try multiplying the decimal form of 1/9 by itself. It is all 1's in the multiplication, so it should be easy, right? If you write it down, the (n+1)th place is generated by summing n 1's. That is, it is .0123456789(10)(11)(12)..., where I put in parentheses the sum of that column's digits. So one has to carry, and as it goes further out one is carrying over many digits; a trillion places out, we are carrying across 12 places, which is larger than the repeating pattern. Just carrying that first 10 leads to .01234567900(11)(12)..., and .012345679 is the basic pattern of 1/81, but it is hard to feel confident about that if one only had the infinite decimal to work with. The point is that something with a non-repeating pattern, such as computing sqrt(2)*pi, seems difficult enough that it verges on the vacuous. He does point out that his criticism applies to Pure Mathematics rather than Applied Mathematics. Approximations are fine for applications, and what Wildberger is really saying is that he wants a Pure Mathematics that supports that explicitly by focusing on rational numbers as much as possible.
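A quick exact check of that carrying story, using Python's Fraction (the helper name is mine): long division shows the naive column sums 0, 1, 2, ..., 9, 10, 11, ... really do collapse, after all the carries, into the repeating 9-digit block of 1/81.

```python
from fractions import Fraction

def decimal_digits(q, n):
    """First n digits after the decimal point of a fraction in [0, 1),
    by exact long division."""
    digits = []
    for _ in range(n):
        q *= 10
        d = int(q)       # next digit
        digits.append(d)
        q -= d           # keep only the remainder
    return digits

digs = decimal_digits(Fraction(1, 9) ** 2, 18)   # 1/81
# the repeating block skips the 8: 0.012345679 012345679 ...
```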
For example, he introduces differential calculus with polynomials by considering transforming p(x) to q(x) = p(x+r), collecting powers of x, and then translating back to p(x) = q(x-r), which is just replacing x with x-r. If expanded out, all the r's cancel, but if one leaves them and then truncates at the different powers, one gets the different polynomial approximations. While neat in avoiding limits, the really nice thing is applying this technique to algebraic curves. For example, we can view the unit circle as the solution to 0 = p(x,y) = x^2 + y^2 - 1. We can do the same trick, computing p(x+r, y+s), expanding, and then retranslating and truncating. This gives us the approximations to the unit circle at a given point on the circle. This sidesteps having to compute the derivative of the square root function to get the tangent lines to the unit circle.
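A small sketch of the univariate version of that trick (exact arithmetic; the helper name and the synthetic-division implementation are my own, not Wildberger's): shift p(x) to p(x+r) and read derivative information straight off the coefficients, with no limits anywhere.

```python
from fractions import Fraction

def taylor_shift(coeffs, r):
    """Coefficients of p(x + r), given the coeffs of p highest-degree
    first, via repeated synthetic division (exact, no limits)."""
    a = [Fraction(c) for c in coeffs]
    n = len(a)
    for i in range(n - 1):
        for j in range(1, n - i):
            a[j] += r * a[j - 1]
    return a

# p(x) = x^3, shifted by r = 2: p(x+2) = x^3 + 6x^2 + 12x + 8
q = taylor_shift([1, 0, 0, 0], 2)
# the x-coefficient of p(x + r) is p'(r): here 12 = 3 * 2^2
```

Truncating q at the linear term gives exactly the tangent-line approximation at r, which is the "leave the r's in and truncate" move described above.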
An example of an alternate workflow is multiplying two complex numbers on the unit circle. The traditional approach is to say "compute the angles and then add the angles". But computing the angles is impossibly hard to do precisely (approximately is fine, of course). There is, however, a perfectly accurate procedure. Take the points z and w on the unit circle and draw a line through them. Then draw a parallel line through 1. That line will intersect the circle at z*w. As a quick example, if you multiply a+bi and -a+bi, this becomes -a^2 - b^2 = -1. Geometrically, the line through these two points is horizontal, and the horizontal line through 1 intersects the circle at -1. You can see that with angles too, but to me it feels less intuitive that that is how it will work out.
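A numerical sanity check of that construction (variable names and the choice of sample points are mine): the chord through z and w is parallel to the chord through 1 and z*w, so the parallel line through 1 really does meet the circle again at the product.

```python
import cmath

z = cmath.exp(0.7j)   # two arbitrary points on the unit circle
w = cmath.exp(2.1j)

chord_zw = w - z          # direction of the line through z and w
chord_prod = z * w - 1    # direction of the line through 1 and z*w

# two complex directions are parallel iff the 2D "cross product"
# Im(conj(a) * b) vanishes
cross = (chord_zw.conjugate() * chord_prod).imag
```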
Even the set of Natural Numbers being called infinite is something he questions. He used the term "unending" which I like as well. And by understanding that "most" natural numbers cannot be represented in this universe (assuming it is a finite universe), then it leads to questions such as what numbers can be represented? We have islands of simplicity such as 10^10^10^10^10^10^10 + 23. How dense are they in the larger numbers? Can we do anything useful with those islands? These questions are less prompted when we simply think of the natural numbers as this one big set of sameness. But if we demand that being able to do the computations is actually an important requirement, then we can investigate many more interesting ideas. And Wildberger's point is that this should be in the domain of Pure Mathematics with it being taught to future mathematicians instead of it being relegated to Applied Mathematics.
I have seen some of Wildberger's work, yes. My stance isn't as strong as him on some points - I certainly don't reject the reals outright or take issue with them. I think they're fascinating mathematical objects, I just reject choosing this particular mathematical object as what we mean by a "number".
> the mathematical objects we generally refer to as "numbers" have no constructive basis.
Eh? I’m confused by your terminology here.
You can construct the set of Real numbers using Dedekind cuts. Less conventionally, you can construct the Surreal numbers, and find the Reals as a subset.
I should say computable basis to be precise. Our fundamental object of computation - the number - generally does not have a computable basis in modern mainstream mathematics (where we generally assume the real numbers by default). The standard Dedekind cut construction does not give computable arithmetic because it involves cartesian products of arbitrary (not necessarily enumerable) infinite sets.
I can understand some of that discomfort. However, ultimately, uncountable infinities and the axiom of choice lead to other interesting scenarios that, IMO, it's best to make peace with.
Besides, mathematicians rarely study arbitrary, probabilistically common objects. We focus on objects with “nice” algebraic properties. Similarly, it doesn't matter that almost all numbers are uncomputable when we don't use them in practice.
Thank you, it’s not an area I’ve gone deep on. It seems to me, as a casual observer, that there is not a single thing called “number”; rather, there is a family of somewhat related concepts, each of which ties in to various semantics and systems of units for the relevant operations and interpretations. So “number” needs at least one adjective in front of it, otherwise we don’t know what we’re discussing.
This technique can't be applied to equality, since equality "collapses" the entire decimal expansion down to a single bit: we can't make closer and closer approximations, since that first bit requires an infinite amount of work.
In fact, talking about "Nth decimals" also requires an infinite amount of work. For example, what's the first digit of '0.99999... + 0.00000...'? Note that this is not the same as 'rounded to one significant digit'. Let's call the numbers x and y; there are a few possibilities:
- If y = 0, then the sum is equal to x:
- If x eventually has a non-9 digit, then the first digit is 0; e.g. if the third digit is a 3, then the result is 0.993...
- If x repeats 9 forever, the first digit can be either 1 or 0, depending on taste
- If y > 0, then the result depends on the relative magnitudes of y and (1 - x). The first digit could be 0 (if y < 1 - x), or 1 (if y > 1 - x). If these differences only occur after billions of decimal places, we must go that far through the calculation to determine whether the first digit is 0 or 1!
Most computable real representations will actually calculate to within some epsilon, e.g. a power of 10 if we're rounding the final decimal when displaying the result. That ensures all of these problems disappear, but of course it introduces uncertainty.
We can likewise ask whether two numbers are equal to some precision; but not necessarily in general.
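A sketch of what "equal to some precision" looks like with Python's decimal module (the function name and guard-digit count are my own choices): unlike exact equality on computable reals, this question always terminates.

```python
from decimal import Decimal, getcontext

def approx_equal(a, b, places):
    """Decidable, always-terminating question: do a and b agree to
    within 10**-places? (Exact equality would need infinite work.)"""
    getcontext().prec = places + 10   # guard digits for the subtraction
    return abs(a - b) < Decimal(10) ** (-places)

third = Decimal(1) / Decimal(3)
# 3 * 0.333...3 = 0.999...9 agrees with 1 far beyond 20 places,
# while 0.3333 visibly differs from 1/3 in the fifth place
print(approx_equal(3 * third, Decimal(1), 20))
print(approx_equal(third, Decimal("0.3333"), 20))
```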
(There might be tricks/identities for the particular functions being used in this calculator; but the above applies to "computable reals" in the strictest sense; i.e. where each function is a Universal Turing Machine)
No, this is wrong. Richardson's theorem is about functions, not constants. Equality of constants constructed from exponentials and logarithms is decidable (assuming Schanuel's conjecture) by another theorem (and algorithm!) of Richardson.
Even without complete decidability, a good CAS implementation can give you "yes/no/not sure" answers that are more than "good enough" for practical use.
Basically the article is describing lazy evaluation, like working with infinite lists in Haskell. Numbers like pi, e, and sqrt(2) are the programs that generate them; we just use syntactic sugar, a compact notation, to represent that program or algorithm. It’s useful to be able to glue all these calculations together, a bit like lazy evaluation or working with generators. For some programs we have a guarantee: we can probe to arbitrary depth and get a consistent result, e.g. pi. In general, though, for two arbitrary real numbers it’s undecidable whether they are even equivalent.
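In that spirit, a Python generator sketch (the analogue of a lazy Haskell list; the helper is my own invention): sqrt(2) "is" the program below, and you probe it to whatever depth you like.

```python
from math import isqrt
from itertools import islice

def sqrt_digits(n):
    """Lazily yield the decimal digits of sqrt(n): the integer part
    first, then one exact digit per step (integer math only)."""
    s = isqrt(n)
    yield s
    while True:
        n *= 100                 # shift two decimal places under the root
        t = isqrt(n)             # integer sqrt at the new scale
        yield t - 10 * s         # the newly revealed digit
        s = t

# probe the "number" sqrt(2) nine digits deep: 1.41421356...
print(list(islice(sqrt_digits(2), 9)))
```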
The linked paper by Boehm about implementation of the Android calculator (6.0+) with constructive reals was also interesting: <https://dl.acm.org/doi/10.1145/2911981>
It's a neat feature but the UX annoys me. For sqrt(2) it goes straight from 1.41421356 to "...41421356E-8". I feel like either it should just scroll the decimal places, or at least when it makes the jump from decimal to scientific it should show the "1" instead of the "..." so it's obvious what it's done.
> Mike Sebastian maintains a page of “calculator forensics”, recording the result of computing arcsin(arccos(arctan(tan(cos(sin(9)))))) on as many different models of calculator as he or his pals can get their hands on. The value should be exactly 9
That's mathematically not correct. To define the function arcsin you have to choose a codomain, which is typically the interval [-pi/2, pi/2]. So for any values x not in that interval you will have arcsin(sin(x)) != x.
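A quick illustration in Python (helper names mine), using degrees to match the calculator setting: outside arcsin's codomain the round trip lands on the equivalent in-range angle, not on the input.

```python
import math

def sin_deg(x):
    return math.sin(math.radians(x))

def asin_deg(y):
    return math.degrees(math.asin(y))

# 9 is inside arcsin's codomain [-90, 90], so the round trip
# returns the input (up to rounding)...
print(asin_deg(sin_deg(9)))
# ...but 100 is not: arcsin picks the representative 180 - 100 = 80
print(asin_deg(sin_deg(100)))
```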
Dynamic typing failures come up in math too. People love to ignore precisely where functions are defined, then try to dynamically extend them elsewhere and are surprised when this spits out nonsense. Another example: saying a function has a non-isolated singularity. It’s more like a codimension-something region where you fail to glue together a coherent result.
I believe this is referring to the trigonometric functions as calculated using degrees, in which the codomain of arcsin is [-90°, 90°]. Of course, the expression still violates dimensional analysis.
I went to the "use my calculator yourself" page. The calculator looks nice, but the interface is either awful or I don't get it. I tried for 5 minutes to type in "1/5 + 1/5 + 1/5" and failed...
which is the same as 1. That number (written as 0.999... or 1) is NOT in the set.
Another way to convince yourself: take 1. Divide by 9; you get 0.111... . Now multiply it back by 9. You get 0.999..., but this must be the same as 1.
Also, 0.999... is not a regular decimal number. It is a writing CONVENTION. It is in fact exactly the same as the fraction 9/9, which is exactly 1.
In standard mathematics there is no such thing as an infinitely small number. All nonzero standard real numbers have some finite separation from zero.
The difference between 0.9 recurring and 1 would have to be infinitely small, as you note. So if it's a number, it can only be zero. It can't be any nonzero number because none of them are infinitely small.
You may not like it intuitively, but this way of thinking is perfectly rigorous and logical and works very well for all practical calculations. Admitting infinitely small numbers makes math way more complicated (nonstandard analysis) and under the hood it ultimately falls back on the standard model anyway.
I wouldn’t say nonstandard analysis is way more complicated. It replaces the use of limits with a different (hyperreal) number system. That makes some things look simpler.
Which is exactly why nonstandard analysis is way more complicated: the usual construction of the hyperreals involves some somewhat technical set theory, and even a relatively fast and loose version is going to require at least a first course in abstract algebra to be intelligible.
The map is not the territory, and a way of notating a number is not the number itself.
Decimal expansions are convenient for most purposes, but they have this property that sometimes there are two different ways of notating the same number. This is unfortunate, but unless you can come up with a better notation (for Dedekind cuts over the rationals, or some other way of constructing the real numbers), we have to live with it.
(Recurring decimals are not "approximation". They're notation for specific individual real numbers)
0.999... is as much an approximation to 1 as 0.333... is to 1/3. That is, not at all, since we use these representations to stand in for the rational number that is their limit. You will find that exactly the numbers that can be represented exactly in finite digits (a.k.a. ending in 000...) have two representations.
Have you checked the Wikipedia article on 0.999 recurring? This seems to be one of the first things taught where a person who is reasonably mathematically savvy and has an intuitive grasp of math might actually trip up, so teaching this fact has become the subject of endless discussion. I don't think you should be called names for this, though -- you are wrong, but you are in good company as far as I've seen.
0.999... is just a geometric series of fractions: 9/10 + 9/100 + 9/1000, etc.
Do you remember how to calculate the sum of a convergent geometric series? The formula is a/(1-r), where a is the coefficient and r is the ratio. In this case, a = 9/10 and r = 1/10, so we have
a/(1-r)
= (9/10)/(1 - 1/10)
= (9/10)/(9/10)
= 1
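The same fact checked exactly with fractions: each partial sum 0.9, 0.99, ... falls short of 1 by exactly 10^-n, and that shrinking gap is the only thing that "converges".

```python
from fractions import Fraction

partial = Fraction(0)
for k in range(1, 30):
    partial += Fraction(9, 10 ** k)   # add the next 9/10^k term

gap = Fraction(1) - partial
# after n terms the partial sum is exactly 1 - 10**-n; here n = 29
print(gap)
```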
If you don't believe that infinite series can be summed, then what is your solution to Zeno's Paradox?[0].
Also, conveniently enough, there's a pretty thorough Wikipedia article on the whole .999 question. [1]
Right - every time you add a '9', the difference gets smaller, but it's still there; it never completely goes away no matter how many times you repeat the process. However, most math doesn't treat '0.99 repeating' as an algorithm to approximate a number, but as an _actual number_. There is no 'adding another 9': all of the infinitely many 9's are there at the same time. It's (at least to me) very different from the intuitive meaning of '0.99...', but if you treat it as the mathematical object '0.99...', not as 'start with 0.99 and keep adding 9's as necessary to approximate', then 0.999999... does in fact equal 1.00000..., because it's impossible to compute a number in between them. (edited to hopefully improve the explanation)
Even if you do have infinitesimals, no reasonable definition of 0.99 repeating is anything except 1.0:
Suppose you have a number space with
a) a sequence of approximations 0.9, 0.99, ... (not necessarily indexed by the naturals, just some infinite set that gets bigger)
b) 1.0
c) enough "niceties" like (a-b)!=0 for a!=b, addition existing, yada yada
Suppose that some "solution" exists in that number space for your favorite way of writing 0.99 repeating. One reasonable thing to expect from such a solution is that your sequence of approximations actually get closer to (or eventually equal) it.
That one constraint (that your better approximations are actually better approximations) forces any possible solution to be exactly equal to 1.0.
That doesn't necessarily mean the solution is 1.0, just that if it isn't (and is something infinitesimally different) then one of the assumptions must have been violated. Unfortunately, most of the assumptions were simply describing the problem, so the only real possibilities for failure (if we assume some solution exists) are the foundations of math being fundamentally flawed (it might happen, but that doesn't seem plausible here), or the number space we're considering being considerably less "nice" than the reals (i.e., not nice enough for most of modern physics and whatnot).
You could of course instead say that it's dumb that we can define 0.99 recurring to actually equal something, but that's just quibbling over the names and representations that we're giving to things -- sequences of better approximations exist, 1.0 exists, and there exists some partial function turning sequences of approximations into their "limits"; calling this particular instance something other than 0.99 recurring doesn't really get rid of the fundamental idea.
All of that aside, you can of course get infinitesimal approximations in other spaces (like the hyperreals), and those infinitesimal approximations are better than any real approximations, but the _actual definition_ of 0.99 repeating isn't equal to any of those approximations -- they just progressively get closer to 1.0.
It’s widely accepted, with proofs and all, to be true in the mathematical system that we use for most purposes.
There may be an alternative system in which that’s not true but for it to be not true you’d have to square it with the consequences of losing some axioms to those aforementioned proofs. You could probably make a whole career in academia out of constructing that system.
Give it a shot and prove all of those career mathematicians with aggregate millennia of experience wrong; you won’t make millions but there might be some cool insights in such a system. Probably something in the realm of the reals vs the computable numbers, which even has applicability in e.g. what axioms we can provably losslessly rely on in FPUs (where we pretend we work on the reals or at least computables but don’t really)
> I have been called names because of this […] the world is dumb and I want off
I suspect these are correlated more strongly to each other than to your mathematical insights
I know this might be a somewhat long and tangential answer, but I wanted to give a different viewpoint, because this is such an unintuitive fact.
Firstly, let me just say, for how much time we spend learning about the number line and calculating with Real numbers, they aren't the cute and familiar number system you think they are. Beneath the facade of familiarity lie a whole bunch of technical constructs and counterintuitive facts which justify the suffering of many undergraduates taking their first course in Real analysis (and the existence of books such as "Counterexamples in Analysis").
Let's start off by trying to define the real number system so that we can agree what a real number actually is -- it's not unreasonable to think that .999... might be a fundamentally different object than 1, perhaps belonging to a set of "approximate" numbers. Without discussing any of the technicalities, I think a reasonable first stab at a definition of the Reals could be the set of all decimal numbers (e.g. .111... or 7.500...).
On to my main point, how should we define equality of two real numbers? The most naively appealing answer would be through equality of their decimal representations (i.e. two numbers are equal if and only if their decimal expansions are equal, and if the decimal representations are different then the real numbers are different). Under this viewpoint, each real number has a unique decimal expansion (since real numbers are in one to one correspondence with decimals, and different expansions mean different numbers), and .999... is not equal to 1 (or 1.000...) because their decimal representations are different.
However, since there are no "gaps" in the Real number line, there must be some other number between the two. What would the decimal representation of this number look like? If .999... and 1 are truly different, then the mean should lie strictly between the two.
(1 + .999...)/2 = .5 + .499... = .999...
Well that's frustrating. What about trying to "squeeze" another number between the two decimal digit by decimal digit? The first decimal digit of this number has to be 9 (since it is less than 1, but greater than .999...), and by the same reasoning so must the second digit, and on and on...
Perhaps 1 - .999... is an "infinitesimal" real number given by .000... infinitely repeating followed by a 1 at the end. If you believe this, let's try and write out the decimal expansion of this number digit by digit. Of course the first decimal digit is 0, along with the second, and the third and so on. From this information, the decimal expansion of this "infinitesimal" number is a string of 0's (i.e. 0). You might object that I'm not considering the 1 at the end: what you really meant was a sequence of numbers getting smaller and smaller i.e. whatever number the sequence .01, .001, .0001 and so on tends to. But if this sequence is to represent a real number, it must have a single unique decimal expansion at the end of the day, and there is no escaping that all of the digits must be 0.
At this point, it should start to be apparent that the idea that every real number has a decimal expansion, and that the expansion must be unique (i.e. Real numbers with different decimal expansions are different numbers) are in conflict with each other. But why should we define equality through representation? After all, 1/3 and .333... infinitely repeating define the same number, but have quite different representations.
I hate to be reductive, but for whatever reason mathematicians have decided that the benefits of allowing the real number system outweigh the drawbacks of allowing non-unique representations of these numbers. In fact, one sees this idea repeated over and over throughout mathematics e.g. completeness of Lp spaces outweighs the drawback of generalizing measure and defining Lp functions as equivalence classes equal a.e., expanding the definition of derivatives to allow for weak differentiability permits a wider class of solutions to PDEs (such as shocks) and so on. There's always some sort of trade-off to be made between nice behavior and power + generality, and while it is often painful to adapt to, it pays off in dividends to go beyond one's intuition in mathematics.
Agreed this isn’t airtight, but if you accept the premise that real numbers are specified by their sequence of decimal digits: .499… + .499… has 9 as its first digit after the decimal point, 9 as its second, and so on -- the only number with this decimal expansion is .999… infinitely repeating, so these must be the same number.