
Most of machine learning uses approximations to calculus & continuous functions, and those approximations make heavy use of polynomials.
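
For instance, a smooth function like exp can be fit quite well by a low-degree polynomial on a bounded interval. A minimal sketch in Python (the Chebyshev fit from numpy.polynomial is just one convenient choice here, not anything specific the parent comment is referring to):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Fit a degree-5 polynomial to exp(x) on [-1, 1] ...
    x = np.linspace(-1.0, 1.0, 200)
    coeffs = C.chebfit(x, np.exp(x), deg=5)

    # ... and check the worst-case error of the approximation on the grid.
    err = np.max(np.abs(C.chebval(x, coeffs) - np.exp(x)))
    print(err)   # small, but not zero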

As for the definition of reals: most programmers already have a basic knowledge of reals and functions, so would working from first principles be of any use for programming?



Please read my argument: I'm not against the use of polynomials. I'm against using "real" numbers.

The "real" numbers taught in high-school or in college are, well, basically, a thinly veiled lie. They "work well" for students who substitute memorizing the page number of a proof of a theorem for actual understanding of a theorem, but they don't work well for mathematicians who would actually want a good theory justifying their existence.

Needless to say, nothing in a computer works as a "real" number. Knowing this is important to understanding that what you work with are, as you called them, "approximations", and that those "approximations" have pathological cases where the distinction will bite you.
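
A couple of concrete pathological cases, assuming ordinary IEEE-754 doubles (which is what Python floats are):

    # "Real" arithmetic says both sides are 0.3; floating point disagrees.
    print(0.1 + 0.2 == 0.3)                         # False
    print(0.1 + 0.2)                                # 0.30000000000000004

    # Addition isn't even associative.
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

    # Small terms can be silently absorbed by large ones.
    print(1e16 + 1.0 == 1e16)                       # True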

Finally, "real" numbers are completely unnecessary for the purpose the author uses them for. Everything works perfectly well with the much simpler and more straightforward concept of rationals, which doesn't require any pretense or wink-wink, fingers-crossed explanations.
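
For what it's worth, exact rationals are already sitting in Python's standard library (fractions.Fraction), which is enough to make the point; a minimal sketch:

    from fractions import Fraction

    # Exact rational arithmetic: no rounding, no pathological cases.
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

    # Evaluating a polynomial at a rational point stays exact.
    def p(x):
        return 3 * x**2 + Fraction(1, 2) * x + 1

    print(p(Fraction(2, 3)))   # 8/3, exactly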

I specifically pointed this out because I remember how, in my days as a CS student, the boneheaded practicum material made my blood boil: a professor would write nonsense like "let A[j] be an array of real numbers" in a... C program! And the same boneheaded professor, when asked to correct that to "floating point" or "rational", would spit out some more nonsense about "real" numbers.



