Hacker News | iamlucaswolf's comments

What exactly is the ask here? Do you want input for a programming model? Practical advice for building a compiler? Do you want to target real hardware or a software simulator?

> I'd settle for a good way to express said tree in a plain text file

What you are referring to is commonly called an IR (intermediate representation). Compilers typically take the AST generated by the parser and translate ("lower") it through increasingly hardware-specific, optimized IRs before performing instruction selection.
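To make "lowering" concrete, here is a toy sketch (all names invented for the example, not from any real compiler) that flattens a tiny expression AST into a three-address-style IR, exactly the kind of plain-text form you could dump to a file:

```python
# Toy "lowering" pass: turn a nested AST expression into a flat
# list of three-address IR instructions, one temp per result.
from dataclasses import dataclass
from itertools import count


@dataclass
class Num:
    value: int


@dataclass
class Add:
    left: object
    right: object


def lower(node, out, temps):
    """Emit flat IR into `out`; return the temp name holding the result."""
    if isinstance(node, Num):
        t = f"t{next(temps)}"
        out.append(f"{t} = const {node.value}")
        return t
    if isinstance(node, Add):
        l = lower(node.left, out, temps)
        r = lower(node.right, out, temps)
        t = f"t{next(temps)}"
        out.append(f"{t} = add {l}, {r}")
        return t
    raise TypeError(f"unknown node: {node!r}")


ir = []
res = lower(Add(Num(1), Add(Num(2), Num(3))), ir, count())
# ir is now:
#   t0 = const 1
#   t1 = const 2
#   t2 = const 3
#   t3 = add t1, t2
#   t4 = add t0, t3
```

Each instruction line is trivially serializable, and the "uses" of each temp give you the edges of the dataflow graph for free.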

I briefly browsed through the links you provided. IIUC, BitGrid is essentially one of the Turing-complete 2-D cellular automata. This Reddit thread might be interesting to you [0].

[0]: https://www.reddit.com/r/computerscience/s/IE5WRIrKsR


>What exactly is the ask here?

I wasn't sure, just a kick in the pants to get me out of the rut I was stuck in.

>Do you want input for a programming model?

Any suggestions are great

>Practical advice for building a compiler?

Yes, or even hints as to the right direction.

>Do you want to target real hardware or a software simulator?

I was just targeting real hardware, and started learning Verilog using Yosys. Then it dawned on me that if you have a compiler that can break code into a huge directed acyclic graph of operations, you can split it up into chunks rather trivially. The compiler becomes a universal solvent for code.

If you have BitGrid chips, you can execute parts of the grid quickly in parallel (1-nanosecond-or-less cycles across the chip); if all you have is CPUs with a lot of RAM, you can execute parts of a huge grid very slowly (about 60 nanoseconds/cell on my PC, single-threaded; I should be able to get an 8x speedup on my 8-thread machine).
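If I understand the BitGrid model correctly from the linked pages (each cell has 4 input bits and 4 output bits, with one 16-entry lookup table per output), a single cell evaluation is tiny. This is just my own sketch of that assumption, not the actual simulator:

```python
# Sketch of one BitGrid-style cell: 4 input bits (N, E, S, W) and
# 4 output bits, each output driven by its own 16-bit truth table.
# This reflects my reading of the linked pages, not the real code.

def eval_cell(luts, n, e, s, w):
    """luts: 4 ints, each a 16-bit LUT (one per output bit)."""
    idx = (n << 3) | (e << 2) | (s << 1) | w  # 4 input bits -> 0..15
    return [(lut >> idx) & 1 for lut in luts]


# Example: output 0 = XOR (parity) of all four inputs, outputs 1-3 = 0.
xor_lut = 0
for i in range(16):
    if bin(i).count("1") % 2:  # odd popcount -> output 1
        xor_lut |= 1 << i

outs = eval_cell([xor_lut, 0, 0, 0], n=1, e=0, s=1, w=1)
# outs == [1, 0, 0, 0]: three inputs set, so the parity bit is 1
```

Since each cell reads only its neighbors' previous outputs, a whole generation of the grid is embarrassingly parallel, which is presumably where the 8x-from-8-threads estimate comes from.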

I'd like to eventually run GNU Radio flowgraphs at better than real time through a BitGrid. I should be able to do it with audio now, for small graphs, using my simulator.

I'd also like to figure out how to make a backend for GeoHot's TinyGrad that outputs a BitGrid graph.

>[0]: https://www.reddit.com/r/computerscience/s/IE5WRIrKsR

Oh my... the x86 Move instruction really has a lot of power built into it. [1] I'll be digging through the GitHub repo he mentioned[2] for a while, it looks quite interesting.

Thanks for your help!

[1] https://www.youtube.com/watch?v=R7EEoWg6Ekk

[2] https://github.com/reMath


I think this question mixes up two orthogonal dimensions: transactionality (i.e., OLTP vs. OLAP) and data model (relational, document-based, graph, vector, ...).

Regarding transactionality: There is an entire area of research on "hybrid transactional and analytical processing" (HTAP) systems that unify OLAP and OLTP. Hyper [1] pioneered this path at TU Munich; its successor Umbra [2] recently incorporated as CedarDB [3]. There are lots of others. Most of these systems, AFAIK, are relational.

Regarding data model: What we've seen in the past few decades is that non-relational DBMSs (excluding key-value stores) only make sense in rare edge cases that require huge scale. There has, e.g., been research [4] showing that graph databases are still, well, lacking compared to relational systems. The common pattern seems to be: unless you need to serve very specific workloads at huge scale, SQL is probably enough [5]. Then again, it really comes down to the specifics. If you were to, for example, implement distributed locking using Postgres, you would likely run into problems with MVCC and Xids very quickly.

So, as you already mentioned, there is no silver bullet. But even today, unless you are Meta or Google, SQL is probably enough for a long time and lots of use cases.

(Full disclosure: I'm working on Hyper full-time).

[1]: https://hyper-db.de/ [2]: https://umbra-db.com/ [3]: https://cedardb.com/ [4]: https://homepages.cwi.nl/~boncz/edbt2022.pdf [5]: https://www.youtube.com/watch?v=VxKt245X_ws


If you had <50% accuracy in expectation you could improve your analysis by inverting it, I suppose...


And, to improve your chances further, you'd just need to know when you'd have more to gain by inverting your choice versus not inverting.
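For binary outcomes the arithmetic is simple: a predictor that is right with probability p is, after flipping every answer, right with probability 1 - p. A quick simulation (with made-up numbers):

```python
# Flipping a below-chance binary predictor yields an above-chance one.
import random

random.seed(0)
truth = [random.randint(0, 1) for _ in range(100_000)]
# A deliberately bad predictor: right only ~40% of the time.
preds = [t if random.random() < 0.4 else 1 - t for t in truth]

n = len(truth)
acc = sum(p == t for p, t in zip(preds, truth)) / n
inv_acc = sum((1 - p) == t for p, t in zip(preds, truth)) / n
# acc is ~0.40, inv_acc is ~0.60; for binary labels they sum to 1,
# so any predictor below 50% is a predictor above 50% in disguise.
```

The catch, as the comments above note, is that you have to know your accuracy is below 50% in expectation before the flip helps.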


My read is that investing in AI is the trendy thing to do in 2024. No CEO will have to justify pouring money into AI; they would probably be seen as negligent if they didn't. It's just the zero-risk strategy at this point in time. That's not to say that there is no value to be realized; it's more a question of how much value and at what price. If in the next 12-24 months we find out that AGI, robotics, and the end of labor are not as imminent as some have hoped, that perhaps not every car dealership needs an AI chat assistant, and the first companies announce reductions in their AI spending, the dominoes might begin to fall. Or not, let's see.


And yet, once the money dries up after chasing another pointless hype, the "visionary" CEO will still keep their job - while starting yet another round of layoffs.


If anything, the post you are responding to explains why CEOs have to think this way, they have no real choice. No one really knows where AI is going, yet staying on the sidelines is not an option.


The job of a CEO is to keep the company viable and liquid long-term, and to communicate their thinking.

"Society made me do it" is not a strategy.


Umbra was recently spun out as CedarDB [1].

And Hyper is alive and well at Salesforce/Tableau! The team working on it is still in large parts the original Hyper team from TUM. You can actually download Hyper (as a binary with language bindings) and play around with it [2] for non-commercial use cases.

If you think Hyper/Umbra is cool, the TUM database group has lots of other very interesting projects going on at the moment. LingoDB [3] pushes the database-as-a-compiler idea to the extreme by implementing query optimization and query compilation in MLIR. LingoDB is open-source. Also, Viktor Leis, who is behind (among many other things) Hyper's morsel-driven scheduling and ART indexes as well as Umbra's buffer management, recently started a very interesting project [4] that heavily co-designs the DBMS with the OS in a unikernel approach. Really interesting stuff!

Disclaimer: I work on Hyper. Views are my own.

[1]: https://cedardb.com/ [2]: https://tableau.github.io/hyper-db/docs/ [3]: https://www.lingo-db.com/ [4]: https://www.cs.cit.tum.de/dis/research/cumulus/


Thanks, that seems to be exactly what I was looking for!

Any insights on the business model? Is this just to advertise the pro tier or do they monetize usage in another way (e.g. selling training data)?


To advertise Pro. Sourcegraph doesn't sell your data, and LLM providers don't retain it, for both the free and Pro versions: https://sourcegraph.com/terms/cody-notice


I believe this is a rational assessment in general. A lot of the discussion around this topic seems to neglect market dynamics.

However, the crux is in the details:

> You can increase the % enough so that overall demand for developers goes down or doesn't grow as much as it would have otherwise.

I would be at least skeptical of this. Every push for commodification that we've seen in the software space so far has been absorbed by demand. Will this continue forever? Nobody knows. At least where I work the backlog is filled to the brim, and every new iteration of tooling begets more babysitting to unlock the promised gains. And customers still have a never-ending list of hyper-specific feature requests.

The friends and colleagues at the Senior/Staff level who are using Copilot/GPT-4 (and have admittedly become much better than me at prompting) didn't exactly become "hyper-productive". Sure, they get code pushed out faster, but they still work long hours and complain about deadlines.

This is not to say that we're all fine forever and things will not change. But as long as we don't experience an across-the-board temperature shift in the job market decoupled from macro-economic events I wouldn't put too much attention there. In the end, doom scrolling is also just a form of procrastination.


First few lines are almost verbatim ChatGPT (without knowing your exact prompt): https://chat.openai.com/share/a7a9c996-f655-427d-bc23-4e9161...

Look, I appreciate that you want to help OP out here, but please keep HN free from low-effort LLM-generated answers like this. Everyone here has access to ChatGPT, so the added utility of pasting its responses is close to zero.


Good catch, it's uncanny how closely you reproduced the first part of the response.


When I started reading the parent comment, I had a sense that it was ChatGPT. I wonder why people do this. Is farming hacker news karma a thing?


I think these are bots testing the waters; they'll probably keep engineering the prompt until no one calls them out.


As someone who never owned cryptocurrency and frankly never cared that much: could you ELI5 the idea of a Bitcoin ETF? Is it just a vehicle to trade Bitcoin (as in a single cryptocurrency; BTC) on a normal stock exchange? Is it an index-weighted portfolio of BTC and other (e.g. Ethereum)? Does the fund actually own BTC or is it technically a derivative like a synthetic (regular) ETF?


> Is it just a vehicle to trade Bitcoin (as in a single cryptocurrency; BTC) on a normal stock exchange?

Yes

> Is it an index-weighted portfolio of BTC and other (e.g. Ethereum)?

No

> Does the fund actually own BTC or is it technically a derivative like a synthetic (regular) ETF?

Yes, a spot ETF is backed by "physical" Bitcoin


How is it "physical" when it is only an IOU from Coinbase?


Retail stocks are usually held in street name. That system seems to be working.


The most talked about "Bitcoin ETF" is the one Blackrock filed an application for. I think it is this one:

https://www.sec.gov/Archives/edgar/data/1980994/000143774923...

My reading of it is that it will only contain Bitcoin.

And that they will not actually own it (as in control it) but only own an email (or whatever paperwork) from Coinbase (or whoever the "custodian" will be) saying that Coinbase holds Bitcoin for the ETF. But I'm not sure. Happy to hear interpretations by others.


Sorta-kinda: VSCode lets you execute code from regular Python files in a Jupyter session [1] by demarcating code cells with "# %%"

[1] https://code.visualstudio.com/docs/python/jupyter-support-py...
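For anyone who hasn't seen it: such a file is ordinary Python, so it still runs as a plain script outside the editor. A minimal example:

```python
# VS Code treats "# %%" comments as cell boundaries and lets you run
# each cell in a Jupyter session; outside the editor this is just a
# normal Python script.

# %% Define some data
values = [1, 2, 3, 4]

# %% Compute a result
total = sum(values)
print(total)  # 10
```

A nice side effect is that these files diff and review cleanly in version control, unlike `.ipynb` JSON.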

