spoj's comments

A lot of valid concerns here against arbitrary code execution in production, for security, performance, auditability, etc.

However, I do resonate somewhat with the post if I think about some accounting processes.

Accounting is where I came from, and a lot of the data processing we do is mostly deterministic, with some "smartness" or judgement sprinkled in. Take bank reconciliation, for example: the basic process is to match bank statement lines with accounting entry lines. In practice, dates, descriptions, and amounts often mismatch between the two for various reasons (typos, grouped bookings, value date vs. transaction date differences, truncated values). This affects a lot of SMEs, and these basic accounting processes are still manual because you need eyeballing. Look at a typical back office Excel spreadsheet and you'll understand.
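To make the "pre-programmed rules" baseline concrete, here is a minimal sketch of a rule-based matcher, assuming a hypothetical record layout of (date, amount, description) and made-up tolerance values:

```python
from datetime import date
from difflib import SequenceMatcher

# Hypothetical records: (date, amount, description)
bank = [(date(2024, 3, 1), 120.00, "ACME CORP INV 1043")]
ledger = [(date(2024, 3, 3), 120.00, "Acme Corporation invoice 1043")]

def similar(a, b):
    """Rough description similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(bank_line, entries, day_tol=3):
    """Match on exact amount within a date tolerance, then pick the
    candidate whose description is most similar. Returns None if no
    candidate survives the hard filters."""
    d, amt, desc = bank_line
    candidates = [e for e in entries
                  if e[1] == amt and abs((e[0] - d).days) <= day_tol]
    return max(candidates, key=lambda e: similar(desc, e[2]), default=None)

print(match(bank[0], ledger))
```

Every extra real-world wrinkle (grouped bookings, truncated descriptions) adds another rule like these, which is exactly where maintainability starts to break down.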

You can pre-program the matching rules up to a certain point, until they become unmaintainable. Or you can use an LLM to generate data-dependent matching logic on the fly. I think there is a space for the latter approach, if we keep the scope tight and well contained. As with all engineering, it's about the trade-offs.

Useful targets for an LLM to generate are subsets of SQL (CREATE VIEW and SELECT statements) or pure functions (Haskell?), where side effects are strictly limited and it's only data in, data out. I am toying with the SQL idea myself (GH: https://github.com/spoj/taskgraph).
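A rough sketch of what "restricted SQL subset" could mean in practice, using SQLite and a coarse allowlist check (the regex guard and table are illustrative, not how taskgraph actually does it):

```python
import re
import sqlite3

# Coarse allowlist: only SELECT or CREATE VIEW, single statement.
ALLOWED = re.compile(r"^\s*(select|create\s+view)\b", re.IGNORECASE)

def run_llm_sql(conn, sql):
    """Execute LLM-generated SQL only if it looks side-effect free.
    A real guard would parse the statement rather than pattern-match."""
    if not ALLOWED.match(sql) or ";" in sql.rstrip().rstrip(";"):
        raise ValueError("statement not allowed")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bank(amount REAL)")
conn.execute("INSERT INTO bank VALUES (120.0)")
print(run_llm_sql(conn, "SELECT amount FROM bank"))  # [(120.0,)]
```

For a production-grade guard, sqlite3's `Connection.set_authorizer` can veto individual operations instead of relying on string matching.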


Coin flipping works only if the failures are roughly independent. More important is the complexity ceiling above which they fail all the time.


So my solution to non-binary failure states is:

1. Generate a potential solution

2. If the solution is complex, chunk it up into logical parts

3. Vote on each chunk and select those with more than k votes

By doing this you can filter out outliers (not always desirable) and pull the signal out of the noise.


This seems to be a study about people's misperception of carbon footprint of people in different income cohorts.

I scanned the paper and didn't find how it controls for the participants' misperception of income inequality to begin with. For example if the survey participants underestimated income inequality between cohorts, that might be the reason they underestimate carbon inequality - they just imagined the top 1% as more similar to other people!


In fact, it is a quintet, not a quartet, that the author was referring to. And it was the one in C major, as is made clear at the beginning.


Interesting that I repeatedly read "quartet" rather than "quintet", probably because the assumption is so strong. He only wrote one of them, so clarifying which piece it was felt unnecessary.

