Great article, and terrible idea. Wonder why the UI guidelines were bent here.

It's astonishing how many web designers have no idea about the difference between inclusive and mutually exclusive options.

I see checkboxes used all the time when they should be radio buttons. It's not rocket surgery. https://www.nngroup.com/articles/checkboxes-vs-radio-buttons...


All in the name of simplicity. Eventually Apple devices will just have one button that says "give us money".



Pressing a button would be optional. No need to give Apple's customers a confusing choice - just transfer all their money to Apple who can prioritise Apple's needs appropriately.


the technology is already there[0].

we're just a few reforms[1] away from having that commercialised.

[0] https://pubmed.ncbi.nlm.nih.gov/3492699/

[1] https://reason.com/2021/04/03/abolish-the-fda/


That's called a poker machine.


I think they're smart enough to stop at: "press this and we'll take your money and tell your friends you're cool."


> Rocket surgery

I'm gonna use this as the superlative of rocket science


I suspect that phrase may be an oblique reference to the absolutely lovely UI testing book "Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems" by Steve Krug.


We've been using "rocket surgery" as a portmanteau of "rocket science" and "brain surgery" for decades, though.


The use of that mixture of rocket science and brain surgery predates that book from 2009 by at least 15 years.


The first time I heard it was on "Trailer Park Boys" as a "Rickyism", so I suspect it would sneak into more people's vocabulary that way. No idea which came first.


Pair Programming is a complete crock of shit. If you hire someone to do a job, either they can do their job, or they cannot. If you cannot trust someone to perform and deliver work up to a certain standard, do not hire them. Is there any other industry where you hire two people to do one job?!

Now there are certainly times when it is very valuable to collaborate, such as rubber duck debugging https://en.wikipedia.org/wiki/Rubber_duck_debugging or mentoring or simply doing a 'desk check'. But these are the exceptions, not the rule. Having someone breathing down your neck or being a back-seat driver all day is at best a waste of productivity, forcing the driver to think at the pace of the observer, and at worst, a complete waste of time and effort.


How can you write an article about this without mentioning that you should use a Decimal type for working with currency calculations?

- https://dzone.com/articles/never-use-float-and-double-for-mo...
- https://husobee.github.io/money/float/2016/09/23/never-use-f...


Sorry, but I remain skeptical. The people who use `float` for financial calculations are probably the same people who don't understand how/why to do the rounding described to avoid these problems. Way too many programmers think that they are working in base 10 when using float. Why not just make them use a Decimal type, and keep them out of trouble?

Also, I wonder what the overhead of rounding every operation is? Comparable to the cost of using a proper Decimal class?

"if you are doing some financial math that does not need to be accurate to the penny, just use floating point numbers."

I would argue that financial math by definition needs to be accurate to the penny. Where are "pretty close" financial calculations considered acceptable? Having worked at a bank, I know how seriously this sort of thing is taken.

From experience, working in scientific applications and numerical computing, summing large numbers of floats is fraught with accuracy problems too.
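
A quick illustration in Python (standard library only); the exact digits aren't the point, but the naive sum visibly drifts while math.fsum stays correctly rounded:

    import math

    xs = [0.1] * 10_000_000
    print(sum(xs))        # naive left-to-right sum drifts away from 1,000,000.0
    print(math.fsum(xs))  # correctly-rounded sum of the actual double values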


> I wonder what the overhead of rounding every operation is? Comparable to the cost of using a proper Decimal class?

For what it’s worth, for the major ICs (Intel, NVIDIA, etc.) there is zero extra overhead. A choice of rounding modes is part of the floating point operation’s instruction. And keep in mind that a floating point op is always rounding no matter what you do; the question is whether it’s always using the same rounding strategy consistently, whether you can control it, and whether it has what you need.

> I remain skeptical

You are correct.

There are good reasons not to use float for money that the article didn’t discuss, and perhaps the author isn’t aware of. You run out of integer precision at 2^24, which is only 16 million. If you process a 20 million dollar payment in units of dollars, you might be off by at least a dollar. That error will multiply with every floating point op done on the result. If your units are pennies, the largest safe value is only 160,000 dollars. If you ever subtract floats, like say make a payment or withdrawal, you can run into catastrophic cancellation without knowing it. Deposit $200k and then withdraw $199k, suddenly you have a small balance with large error that could remain in your account and continue to grow until the balance is zero. https://pharr.org/matt/blog/2019/11/03/difference-of-floats....
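
To make those two failure modes concrete, a small Python sketch (numpy is used here only to get a 32-bit float type; the figures are illustrative):

    import numpy as np

    a = np.float32(2**24)              # 16,777,216
    print(a + np.float32(1) == a)      # True: past 2^24, adding 1 is silently lost

    deposit  = np.float32(200000.01)
    withdraw = np.float32(199999.99)
    print(deposit - withdraw)          # 0.03125 instead of 0.02: cancellation exposes the rounding error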


It has great benefits to the user, though. Once you have $20 million in the bank, you can continue to withdraw $1 at a time as often as you like without spending any of your principal!


If by benefits to the user you mean the bank made a mistake and the IT department now has a production P0 issue that will probably have everyone working to fix it yesterday, then sure.


> You run out of integer precision at 2^24, which is only 16 million

To be fair, the author seems to be suggesting using double-precision floating point. If you use integer numbers of pennies, signed 32-bit integers cap out at a similar value of $21M. 32 bits is just too small for financial calculations.
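
Concretely (Python, just to spell out the arithmetic):

    print((2**31 - 1) / 100)   # 21474836.47 -- the most dollars signed 32-bit cents can hold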


You’re absolutely right; he did say 64 bit. I just wouldn’t do that blindly either, and the author admitted to not being fluent in error analysis. The issue with even doubles is that the magnitude of your error in a running total calculation is a sum of all the errors of your largest intermediate results (the results of multiplies you don’t see or store explicitly). That means with a bank account, the error of your calculations continues to grow forever unless you are explicitly correcting the errors. Rounding does not solve that, so even using doubles for money is a sketchy proposition unless you really know what you’re doing.


Isn't the proposal in the article "round after every operation"? Since rounding to nearest-cent values corrects each subtotal's error to zero, this should work up to 2^53 cents ($90T).
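
A sketch of how I read that proposal, in Python (round_cents is just an illustrative helper, not something from the article):

    def round_cents(x):
        # snap a dollar amount back onto the exact cent grid after each operation
        return round(x * 100) / 100

    balance = 0.0
    for _ in range(1_000_000):
        balance = round_cents(balance + 0.10)
    print(balance)   # 100000.0; each subtotal stays the closest double to a whole number of cents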


Yes, that is the proposal. Rounding doesn’t correct errors though, it can make the error grow faster. Rounding just keeps you from dealing with sub-pennies.


> I would argue that financial math by definition needs to be accurate to the penny. Where is "pretty close" financial calculations considered acceptable?

There is a difference between analysis and accounting. There are many financial models (e.g. Black-Scholes-Merton option pricing) that are analytic in nature and use transcendental functions, so the idea of getting an exact, arbitrary-precision answer is hopeless. Using a decimal type for this kind of computation would be an exercise in slowing down processing by a few orders of magnitude.


This is an excellent point.

When computing around money, you're either working with magnitudes or units.

If you don't know which, you're working with units; use a Decimal.


If you are working with units, ordinary binary integers are overwhelmingly better for almost every operation.


> I would argue that financial math by definition needs to be accurate to the penny.

It sometimes needs to be accurate beyond the penny; pennies may be the smallest unit of settlement, but they aren't the smallest tracked quantity in all financial matters.


This thread seems to be unaware of DEC64, so this seems as good a place as any to point to this: http://www.dec64.com/

The gist: It's efficient (adds and mults in a few instructions), and preserves the decimal representation.

It's quite simple, really: Store a whole-number integer with a smaller one representing the number of shifts to the decimal point. This is probably a good choice for sensitive financial calculations.
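
A toy Python sketch of that idea (nothing like the real DEC64 bit layout, just the coefficient/exponent split it describes):

    class Dec:
        # Toy decimal: value = coefficient * 10**exponent, both plain ints.
        def __init__(self, coefficient, exponent):
            self.c, self.e = coefficient, exponent

        def __add__(self, other):
            # align to the smaller exponent, then add coefficients exactly
            e = min(self.e, other.e)
            return Dec(self.c * 10**(self.e - e) + other.c * 10**(other.e - e), e)

        def __repr__(self):
            return f"{self.c}e{self.e}"

    print(Dec(1, -1) + Dec(2, -1))   # 3e-1, i.e. 0.1 + 0.2 is exactly 0.3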


> I would argue that financial math by definition needs to be accurate to the penny. Where is "pretty close" financial calculations considered acceptable?

Computing tips, from the standpoint of a customer paying the tip.

Saying "pay 20%" isn't precise to begin with, in that you will not use fractional cents to make it precise, so having a tipping granularity a bit larger than a penny is also acceptable, particularly in jurisdictions where there is no one-cent coin (or equivalent) in any event.


> I would argue that financial math by definition needs to be accurate to the penny.

Yes, indeed. There is no such thing as "financial math that does not need to be accurate to the penny". I wonder if OP has ever worked with a bookkeeper...


Bookkeeping needs to be accurate to the penny. Estimating growth for the next quarter or last year's GDP doesn't.


Yes, but why switch systems, though?


How are you going to do linear interpolation, let alone exponential growth, without floating-point? On August 1 you made $1000, on August 7 you made $1100, assuming that growth is linear, how much are you going to make on August 20?

If you have some routines for dividing fixed-point numbers: one, why do you believe they have more accuracy than floating point (especially if you're doing divisions by numbers that aren't divisible by 10, as in the example above - don't you have the same problem as with floating point?), and two, why do you believe they're more correct than floating point? Did you write a test suite? Do you know what needs to be tested? What prevented you from writing the test suite for the floating-point calculations you were originally going to do?
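
For what it's worth, the interpolation in that example can be done entirely in integer cents, with the rounding decision made explicit (a sketch of the fixed-point approach, not a claim about what any bank does):

    # amounts in integer cents, dates as day-of-month
    d0, v0 = 1, 100_000        # Aug 1, $1000.00
    d1, v1 = 7, 110_000        # Aug 7, $1100.00
    d = 20

    num = v0 * (d1 - d) + v1 * (d - d0)    # exact integer numerator
    den = d1 - d0
    cents = (2 * num + den) // (2 * den)   # integer division with round-half-up
    print(cents / 100)                     # 1316.67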


> How are you going to do linear interpolation, let alone exponential growth, without floating-point?

Fixed point is one answer, but I see you know that already. I don’t know what the banks use for interest, but I can guarantee that it’s not float32.

> If you have some routines for dividing fixed-point numbers, one, why do you believe they have more accuracy than floating point

Fixed point routines do not have more best-case accuracy than float, given the same number of bits ... but float32 definitely has a worst-case accuracy that is very very bad compared to a fixed point number.

> why do you believe they’re more correct than floating point?

Can’t speak for the GP, but I think asking about correctness is a straw man. The issue is really about safety, predictability, and controllability. Floating point can be very accurate, but guaranteeing that accuracy is notoriously difficult, and it depends very much on the unknown ranges of your intermediate calculations. Fixed point, on the other hand, never changes accuracy as you go, so you don’t get surprises.


> I think asking about correctness is a straw man. The issue is really about safety, predictability, and controllability.

Uh, and compliance? (if you didn't mean to imply that under "controllability")


Yeah, definitely. My list certainly isn’t exhaustive; just trying to express that use of floating point isn’t only a matter of correctness, there are lots of other factors. The dev time cost of using floats correctly for financial calculations is higher than the dev time cost of using ints or fixed point numbers.


That's not strictly true. Some of the work I do involves software to do premium calculations for insurance. While pennies do matter for intermediate values during calculation, virtually everyone rounds the final premium to a dollar amount or the closest 10 cent increment. Nobody cares about pennies when each transaction is hundreds or thousands of dollars.


Rounding at the last step is very different from rounding at each step.


It's hard to consider float anything but a hack - and certainly something to keep away from discrete domains (countables).

I'm a little more curious about the suitability of using integers for money though (integer number of pennies). I suppose there are cases (and rules) concerning fractional pennies? Like in the case of interest rates, often given per year but accumulated[1] per month?

[1] that's probably not the correct term in English.


> It's hard to consider float anything, but a hack

GPU engineers might have something to say about that. Depending on the context, trading off on precision can be a valid engineering choice.


In some countries, cents are a very insignificant amount (whole-number prices already carry three extra zeros), and businesses are allowed to round to the nearest hundredth; most small to medium businesses round to an integer.


The core is simple, small, cheap or even free, requires few resources, has plenty of tool support, is well-understood and well-documented, and is easy to debug and deploy. The 8051 is perfectly sufficient for many simple embedded applications that only require an 8-bit micro.

It's the instruction set that has been retained, not the silicon design. The variants these days are more power-efficient and powerful in terms of MIPS and peripherals, and have indeed benefited from years of R&D.


But is an ISA that wasn’t designed for embedded really that well suited for it?

And if the silicon design is new, we are not benefiting all that much from decades of battle testing, right?

I can’t imagine how a clean, embedded-first 32-bit ISA design wouldn’t be more appropriate.


> But is a ISA that wasn’t designed for embedded really that well suited for it?

But 8051 was designed for embedded:

> The Intel MCS-51 (commonly termed 8051) is a single chip microcontroller (MCU) series developed by Intel in 1980 for use in embedded systems

(wikipedia)

> I can’t imagine how a clean, embedded first 32bit ISA design wouldn’t be more appropriate

I guess we'll see how RISC-V will develop.


Ah, I didn’t know that. I thought it was a repurposed chip. That makes sense. Thanks for the clarification.


The 8051 was always designed for "embedded". It's a microcontroller. The ISA is nothing like the 8080's.


Yes, but it's a bitch to program: multiple memory hierarchies and address spaces (at least 3), only one index register (hard to move stuff), and enough variants that "8051" is more of a species definition than a particular architecture.

(disclaimer: I sell an 8051 based product, have sold them in the past - never again)


It was designed for embedded use.


What a damn shame. I use git when I have to, hg when I get to choose. The usability and ergonomics are just so much better. I've never had any performance complaints. I chose Bitbucket over Github specifically because of Mercurial support. Unfortunately it's a business decision and they had to make a tough call.


Very few developers truly understand floating point representation. Most think of it as base-10, and put in horrific kludges and workarounds when they discover it doesn't work as they (wrongly) expected. I shudder to think how many e-commerce sites use `float` for financial transactions!

So as far as I'm concerned, whatever performance cost these alternate methods may have, it would be well worth it to avoid the pitfalls of IEEE floats. Intel chips have had BCD support in machine code; I'm surprised nobody has made a decent fixed point lib that is widely used already.


Replacing all the IEEE 754 hardware with posits won't fix this, though.

If you don't care about performance, then the actual solution has no dependency on hardware:

1. Replace the default format for numbers with a decimal point in suitably high-level languages with an infinite precision format.

2. Teach people using other languages about floating point and how they may want to use integers instead.

The end. No multi-generation hardware transition required.

IMO, IEEE 754 is an exceptionally good format. It has real problems, but they aren't widely known to people unfamiliar with floats (e.g. 1.0 + 2.0 != 3.0 isn't one of them).


> 1. Replace the default format for numbers with a decimal point in suitably high-level languages with an infinite precision format.

Unlimited precision in any radix point based format does not solve representation error. If you don't understand why:

  - How many decimal places does it take to represent 1/3? (infinite, AKA out of memory)

  - Now how many places does it take to represent 1/3 in base 3? (1)

If you are truly only working with rational numbers and only using the four basic arithmetic operations, then only a variable precision fractional representation (i.e. a numerator and denominator, which is indifferent to the underlying base) will be able to store any number without error (if it fits in memory). Of course if you are using transcendental functions or want to use irrational numbers, e.g. pi, then by definition there is no numerical solution to avoid error in any finite system.
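
In Python terms, that variable-precision fractional representation is the standard library's fractions.Fraction:

    from fractions import Fraction

    third = Fraction(1, 3)
    print(third + third + third == 1)          # True: exact, no base, no rounding
    print(Fraction(1, 10) + Fraction(2, 10))   # 3/10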


I'm pretty sure that 1.0 + 2.0 == 3.0 in IEEE 754. :) Now, 0.1 ...


For reference to what they're talking about, the helpful https://0.30000000000000004.com/


One of the great qualities of IEEE 754 is its ability to represent many integers and operate on them without rounding errors.
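
Concretely, since Python floats are IEEE 754 doubles:

    print(float(2**53 - 1) == float(2**53))    # False: every integer up to 2**53 is exact
    print(float(2**53) == float(2**53 + 1))    # True: 2**53 + 1 is the first integer that isn't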


Yep, sorry, meant to put 0.1 ...

Thanks!


They are not the same thing, but they are close enough most of the time.

The issue with floating point arises when comparing very close numbers.


Great points. I agree - the performance hit is simply the cost of being accurate and having predictable behaviour.

I'm not suggesting we replace all our current HW with chips that implement posits (let's fix branch prediction first!!). More that FP should be opt-in for most HLLs.


In the paper "Do Developers Understand IEEE Floating Point" the authors surveyed PhD students and faculty members from computer science and found that your observation is true: most people don't know how FP works.

They have the survey online at [1] in case you want to see how much you know about fp behavior.

[1] http://presciencelab.org/float


That was a waste of time: 6 pages of questions, and instead of "grading" it and letting me know where I stand on FP, it says "Thanks for the gift of your time".


Likewise!

FYI the actual paper is https://ieeexplore.ieee.org/document/8425212 and you can get the PDF here: http://pdinda.org/Papers/ipdps18.pdf


Interestingly, they got one of their own answers wrong.

The question is:

    if a and b are numbers, it is always the case that (a + b) == (b + a)

and their notes on it:

    Is a simple statement involving the commutativity over addition true for floating point? Generally, floating point arithmetic follows the same commutativity laws as real number arithmetic.

They make it clear in their notes that "are numbers" includes infinities but not NaNs. Now consider the case where a = inf and b = -inf. Then inf + (-inf) is NaN, and (-inf) + inf is NaN, and NaN != NaN.

    >>> a = float('inf')
    >>> b = float('-inf')
    >>> a + b == b + a
    False


Nice catch.

> They make it clear in their notes that "are numbers" includes infinities but not NaNs.

This is definitely not very clear in the form, though.


Sorry for tricking you into giving real data for the researchers LOL.

Another comment pointed to the paper with the answers.

A great resource to learn about the FP corner cases is the great Random ASCII blog:

https://randomascii.wordpress.com/category/floating-point/


Considering that back in the 1980s we figured out that you should use some kind of 'integer' or 'BCD' based type for financial calculations, it is astonishing that by almost 2020 people are making these same mistakes over and over again.

A 64-bit integer is big enough to express the US National Debt in Argentine Pesos.


> it is astonishing that by almost 2020 people are making these same mistakes over and over again.

It's actually logical: the number of developers doubles roughly every 5 years. It means that half of the developers have less than 5 years of experience. If they don't teach you this in school (university), you will have to learn from someone who knows, but chances are the other developers are as clueless as you.


OMG! The Eternal Eternal September.


It's not enough to represent one US cent in original Zimbabwean dollars as of 2009, though. Admittedly, those 'first' dollars hadn't been legal for quite some time by then, the currency having gone through three rounds of massive devaluation and collapse totalling something like 1x10^30.


> Most think of it as base-10 [...] I'm surprised nobody has made a decent fixed point lib that is widely used already.

Note that a fixed radix point does not by itself solve the common issues with representing rational base-10 fractions. A base-10 fixed radix solution would, as would IEEE 754's decimal64 spec, which eliminates representation error when working exclusively in base 10 (e.g. finance), but these are not found in common hardware and do not help reduce propagation of error due to compounding with limited precision in any base.


The use of binary in the numerator is not the problem with using floats for financial math; it is the use of powers of 2 in the denominator.

The numerator is just an integer, and integers are just integers: the base doesn't matter. But if the exponent is base 2, then you can have 1/2, 1/4, 1/8 in the denominator, but not 1/5 or 1/10.
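
You can see the power-of-two denominator directly in Python:

    print((0.5).as_integer_ratio())   # (1, 2): exactly representable
    print((0.1).as_integer_ratio())   # (3602879701896397, 36028797018963968), i.e. n / 2**55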


Rational numbers are a good way to do many financial calculations (extra points for doing something useful with negative denominators!), since many financial calculations are specified with particular denominators (per cent, per mille, basis points; halves, sixths, twelfths, twenty-sixths, fifty-seconds of a basis point, etc.).

However, as soon as you start doing anything interesting, you have limited precision as a matter of course.


If there's one thing I've always really appreciated about Groovy, it's that it used BigDecimal as the default for fractions, because 9 times out of 10, you need accuracy more than you need high performance and large exponents (and if you do need high performance, you wouldn't be using Groovy anyway).

Sadly most languages don't support something like that out of the box.


> Very few developers truly understand floating point representation.

Where would one go to better understand how floating points are represented?


“What every computer scientist should know about floating point” by David Goldberg. Readily available free online.


Thanks!


Sorry, but you actually sound like one of those people who doesn't really know how FP works.

> I shudder to think how many e-commerce sites use `float` for financial transactions!

The IEEE-754 float represents up to 9 decimal digits (or 23 binary digits) of precision. The double represents 17 decimal digits. The error upper bound is (0.00000000000000001)/2 per operation. Likely irrelevant for most e-commerce.

Also, databases store currency values using fixed point.

> Intel chips have had BCD support in machine code

BCD is a floating point encoding, not fixed point. AFAIK, only Intel supports it, and very precariously.

> I'm surprised nobody has made a decent fixed point lib that is widely used already.

Nonsense. If you do any scientific computation you likely have Boost, GMP, or MPFR installed on your system. They support arbitrary precision arithmetic with integer (aka fixed point), rational, and floating point types.


> Sorry, but you actually sound like one of those people who doesn't really know how FP works.

LOL, sure ok. Worked on banking systems for 2 years and been doing scientific computing for many more. Pretty comfortable with fixed and floating point.

> [error bounds] Likely irrelevant for most e-commerce.

Those bounds are theoretical, and there are plenty of occasions I have come across in the past when rounding errors were observed. It was forbidden in the bank to use floating point! We went to enormous lengths to ensure numerical accuracy and stability across systems.

I think this article has a pretty good explanation:

https://dzone.com/articles/never-use-float-and-double-for-mo...

> Nonsense. If you do any scientific computation you have likely have Boost, GMP, MPFR installed in your system. They support arbitrary precision arithmetic with integer (aka fixed point), rational and floating point.

Yes, absolutely right; I have used several of those 3rd party libs myself, as well as hand-rolling fixed point code (esp for embedded systems). I didn't write what I intended. I meant to say that very few languages have first-class fixed point in their standard library. So long as the simple `float` is available as a POD, people will (mis-)use it.

I think in a general purpose HLL, a fixed decimal type should be the default, and you should have to opt in to IEEE-754 floating point.
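
For illustration, Python's standard-library decimal module is roughly the behaviour I'd want on by default (today it's opt-in):

    from decimal import Decimal, ROUND_HALF_EVEN

    price = Decimal("19.99")
    total = (price * 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    print(total)   # 59.97, with no binary representation error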


First, I would like to apologize for my attitude on the original post, my choice of words was very inappropriate for this forum. Second. I would like to thank you for bringing another point of view.

I'm talking about the average Joe e-commerce site. The precision requirements of financial institutions and large e-commerce operations are stricter than those of most websites. For the average e-commerce site, the extra cost of using a decimal arithmetic framework might not be reasonable. Of course, software engineers should use currency types when they are available.

> https://dzone.com/articles/never-use-float-and-double-for-mo...

The example used by the website is misleading. They print a value with all the significant digits. In reality, e-commerce sites are only concerned with two digits after the decimal point. I ran the same program in C with one billion (-1) iterations and the printed value with two decimal digits was exact. It was the same as it would be if I had used decimal arithmetic.

> I think in a general purpose HLL, a fixed decimal type should be the default, and you should have to opt in to IEEE-754 floating point.

Most modern languages have some support to rational, fixed point and currency arithmetic. Nowadays, even GCC's C supports Fixed Point arithmetic [1].

The hardware implementation of floating point arithmetic is concerned with hardware efficiency and support for a lot of use cases. The ranges and accuracy needed in different applications vary widely. The BCD encoding is very inefficient hardware-wise compared to binary encoding.

In summary, I do agree people working with currencies should use the proper currency type, but I also think using IEEE 754 is usually fine, especially in the front-end. Also, I don't think the trade-offs of changing from binary arithmetic to decimal arithmetic are worth it for most people and hardware systems.

[1] https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gcc/Fixed-Point.htm...


People still use Perl?


IMHO SCons had some nice ideas but it falls down in practice in non-trivial projects. Wrestling with eels is easier than doing cross-platform builds in SCons. And while I do love Python, I think using a fully-fledged language is a mistake.

I think a declarative DSL with scripting hooks behind it is a better model. Having used many, many build systems over the years, I've never found one I thought truly got things right (though several came close).

I actually think the ideas behind bjam ought to be revisited. It had a declarative style, with scripting to support it. The implementation was awful sadly; the tiniest mistake or typo would send the parser into a tailspin and the documentation was truly confusing. Any errors were presented to the user at the scripting level, which was even more confusing. But the idea of having toolchains defined separately, and having traits and features was brilliant. There's a lot to learn from there.


"Yet another 'academic' claims to have cracked part of the Voynich code but doesn't realise how much they still don't understand"

Fixed that for you.

