scottmas's comments | Hacker News

You don’t really say this in your article, but are you pretty sure you could’ve gotten the exact same results, probably with better convergence, if you’d just used gradient descent and an optimizer like Adam?

To be clear, I think GAs are way cooler though, haha. So kudos to you for this awesome write-up.


Thanks! I do have a section on this in the article, "Why genetic algorithms aren't state of the art":

"Physics simulation involves discontinuities (contacts, friction regimes), long rollouts, and chaotic dynamics where small parameter changes lead to large outcome differences. Even with simulator internals, differentiating through thousands of unstable timesteps would yield noisy, high-variance gradients. Evolution is simpler and more robust for this regime." "The real tradeoff is sample-efficient but complex (RL) vs compute hungry but simple (GA). DQN extracts learning signal from every timestep and assigns credit to individual actions."

DQN likely would have handled this much better.


Looks pretty awesome! Especially the native joins between warehouse tables and the OLTP db.

Will pricing likely just be a percent markup over the (excellent) Ubicloud prices they have listed? (https://www.ubicloud.com/docs/about/pricing)


Thank you for chiming in. Pricing is still TBD and will be finalized in the coming months before the service goes GA. At a high level, we plan to keep it competitive and also to make it inclusive of the integration features (native CDC + pg_clickhouse). Stay tuned!


So cool! Do any languages besides Haskell support STM first-class?


Scala supports it with for-comprehensions, which are equivalent to Haskell's do-notation, but STM is not part of the Scala standard library. ZIO and Cats Effect are two popular Scala effect systems with STM.


Not "first class" but pretty good in Kotlin

https://arrow-kt.io/learn/coroutines/stm/


The new Verse language from Epic Games (and a core Haskell contributor) has a lot of transactional features. I don’t know if it’s exactly the same as STM, though.


Verse only supports single-threaded transactional memory. Epic hasn't yet demonstrated that their approach can actually scale to be used from multiple threads in a useful manner, though they claim that it will.


I believe Clojure has first-class support for STM.


Looks like somebody made a Rust experiment back when Rust was new: https://docs.rs/stm/latest/stm/


Scala has great STM in the same way (monad-based).


I think a decade ago or so, people started trying to integrate STM into PyPy.


There are C++ libraries that offer it.



The whole benefit over Zod seems to be performance, so could you do some benchmarking? I wonder if it’s worth it.


Hey, I did some benchmarks if you're interested; the benchmark results are in the README.md.

https://github.com/nimeshnayaju/zod (Fork of Zod's repo which already included benchmarks comparing Zod 4 against Zod 3, so I simply integrated my validation library)

https://github.com/nimeshnayaju/valibot-benchmarks (An unofficial benchmark suggested in another comment comparing Valibot against Zod)


Yes, the primary focus is memory efficiency; performance improvement is a side effect of that. From my own benchmarks, I have found that to be the case. If you're validating thousands of objects per second or working under memory constraints, the difference becomes quite significant. Happy to share the full benchmark code if you'd like to run it yourself!


You should include this benchmark in your repo and README if you want to build trust.

I think anything that declares itself as a performance improvement over the competition ought to prove it!


But how do you dump your entire codebase into Gemini? Literally all I want is a good model with my entire codebase in its context window.


I wrote a simple Python script that I run in any directory; it gathers the context I usually need and copies it to the clipboard/paste buffer. A short custom script lets you adjust it to your own needs.
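Something in this shape works (a sketch, not the exact script; adjust EXTENSIONS and SKIP_DIRS to your stack):

    #!/usr/bin/env python3
    """Dump source files under the current directory as one LLM-ready blob."""
    import pathlib
    import subprocess
    import sys

    EXTENSIONS = {".py", ".ts", ".tsx", ".md"}  # adjust to your stack
    SKIP_DIRS = {".git", "node_modules", "dist", "__pycache__"}

    def collect(root):
        parts = []
        for path in sorted(root.rglob("*")):
            if any(part in SKIP_DIRS for part in path.parts):
                continue
            if path.is_file() and path.suffix in EXTENSIONS:
                header = f"===== {path.relative_to(root)} ====="
                parts.append(header + "\n" + path.read_text(errors="replace"))
        return "\n\n".join(parts)

    if __name__ == "__main__":
        blob = collect(pathlib.Path.cwd())
        try:
            # macOS paste buffer; swap in xclip/xsel (Linux) or clip (Windows).
            subprocess.run(["pbcopy"], input=blob.encode(), check=True)
        except FileNotFoundError:
            sys.stdout.write(blob)  # no pbcopy: print so it can be piped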


Repomix can be run from the command line:

https://github.com/yamadashy/repomix


Legal issues aside (you are the legal owner of that code, or you've checked with the owner), and provided it's small enough, just ask an LLM to write a script to do so. If the codebase is too big, you might have luck choosing the right parts. The right balance of inclusions and exclusions can work miracles here.


Cursor can index your codebase efficiently using vector embeddings rather than literally adding all your text files into context. Someone else mentioned machtiani here, which seems to work similarly.
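The idea, stripped down to a toy (the hashing "embedder" here is just a stand-in for a real embedding model; in practice you'd also chunk files rather than embed whole ones):

    import hashlib
    import math

    def toy_embed(text, dim=256):
        # Stand-in for a real embedding model: hash tokens into a fixed vector.
        vec = [0.0] * dim
        for token in text.lower().split():
            h = int(hashlib.md5(token.encode()).hexdigest(), 16)
            vec[h % dim] += 1.0
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]

    def top_k(query, chunks, k=5):
        # Return the k chunks most similar to the query (cosine similarity),
        # which is what gets stuffed into the context window instead of
        # the whole codebase.
        q = toy_embed(query)
        scored = sorted(
            chunks,
            key=lambda c: sum(a * b for a, b in zip(q, toy_embed(c))),
            reverse=True,
        )
        return scored[:k]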


Possible to run this in ComfyUI?


The repo has sample code, and it's fairly easy to create a node that will do it.

You won't, however, have access to the usual sampler, latent image, and LoRA nodes, so you can't do much beyond basic t2i.
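For reference, a minimal custom node skeleton looks roughly like this (`run_model` is a hypothetical stand-in for whatever the repo's sample code actually exposes):

    import torch

    class SimpleT2INode:
        # Minimal ComfyUI custom node; put it under custom_nodes/ to register.
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"prompt": ("STRING", {"multiline": True})}}

        RETURN_TYPES = ("IMAGE",)
        FUNCTION = "generate"
        CATEGORY = "image/generation"

        def generate(self, prompt):
            # Call the repo's sample inference code (hypothetical helper that
            # returns an HxWx3 uint8 array), then convert to the
            # [batch, H, W, C] float tensor in [0, 1] that ComfyUI expects
            # for IMAGE outputs.
            image = run_model(prompt)  # hypothetical, not a real API
            return (torch.from_numpy(image).unsqueeze(0).float() / 255.0,)

    NODE_CLASS_MAPPINGS = {"SimpleT2INode": SimpleT2INode}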


Why? There is nothing to customize with Flux.


What do you mean there's nothing to customize with Flux, can you expand on this claim?


Before an LLM discovers a cure for cancer, I propose we first let it solve the more tractable problem of discovering the “God Cheesecake”: the cheesecake so delicious that a panel of 100 impartial chefs judges it to be the most delicious they have ever tasted. All the LLM has to do is intelligently search through the much more combinatorially bounded “cheesecake space” until it finds this maximally delicious cheesecake recipe.

But wait… an LLM can’t bake cheesecakes, nor, if it could, would it be able to evaluate their deliciousness.

Until AI can solve the “God Cheesecake” problem, I propose we all just calm down a bit about AGI.


These cookies were very good, but not God-level. With a bit of investment and more modern techniques, I think you could make quite a good recipe, perhaps doing better than any human. I think AI could make a recipe that wins in a very competitive bake-off, but it isn’t possible for it, or for anyone, to win with all 100 judges.

https://static.googleusercontent.com/media/research.google.c...


What would you say if the reply was "I need 2 weeks and $5000 to give you a meaningful answer"?


Heck, even staying 100% within the limitations of an LLM executing on a computer, it would be world-changing if LLMs could write a really, really good short story or even good advertising copy.


They are getting better and better. I am fairly sure the short stories and advertising copy you can produce by pushing current techniques harder will also improve.

I don't know whether current techniques will be enough to 'write a really, really good short story', but I'm willing to bet we'll get there soon enough (whether that'll involve new techniques or not).


TikTok is the digital version of this


I mean... does anyone think that an LLM-assisted program to trial and error cheesecake recipes to a panel of judges wouldn't result in the best cheesecake of all time..?

The baking part is robotics, which is less fair but kinda doable already.


> I mean... does anyone think that an LLM-assisted program to trial and error cheesecake recipes to a panel of judges wouldn't result in the best cheesecake of all time..?

Yes, because different people like different cheesecakes. “The best cheesecake of all time” is ill-defined to begin with; it is extremely unlikely that 100 people will all agree that one cheesecake recipe is the best they’ve ever tasted. Some people like a softer cheesecake, some firmer, some more acidic, some creamier.

Setting that problem aside (assuming there exists an objective best cheesecake, which is of course an absurd assumption), the field of experimental design is about a century old and will do a better job than an LLM at homing in on that best cheesecake.
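Even the dumbest classical loop illustrates the point. A toy sketch (here `panel_score` fakes the tasting with a hidden "ideal" recipe, since the real version is a bake plus 100 judges; the parameters and bounds are made up):

    import random

    # Toy sequential experiment over three recipe parameters.
    PARAMS = {"sugar_g": (100, 400), "bake_min": (40, 90), "cheese_g": (400, 900)}
    IDEAL = {"sugar_g": 260, "bake_min": 62, "cheese_g": 680}  # pretend optimum

    def panel_score(recipe):
        # Stand-in for a real tasting: closer to the hidden ideal scores higher.
        return -sum((recipe[k] - IDEAL[k]) ** 2 for k in IDEAL)

    def hill_climb(steps=50):
        best = {k: random.uniform(lo, hi) for k, (lo, hi) in PARAMS.items()}
        for _ in range(steps):
            # Perturb one parameter at a time; keep the change if judges prefer it.
            cand = dict(best)
            k = random.choice(list(PARAMS))
            lo, hi = PARAMS[k]
            cand[k] = min(hi, max(lo, cand[k] + random.gauss(0, (hi - lo) * 0.1)))
            if panel_score(cand) > panel_score(best):
                best = cand
        return best

No LLM appears anywhere in the loop, which is rather the point of the comment below.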


What would be interesting is a system that makes some measurement of a person (e.g., analyzing a video of them eating different cheesecakes or talking about their tastes) and bakes the best cheesecake specifically for them.

Then you can get a panel of 100 people and bake 100 cheesecakes for them.


And you really think an LLM-assisted program that follows experimental design principles wouldn't be able to find the best cheesecake for each judge's tastes, and to map and optimize the overall space of cheesecakes to find the best for the group according to multiple metrics?


Not any better than, say, an ELIZA-assisted program that follows experimental design principles. At that point the LLM is superfluous.


You don't even need AI for that. Try a bunch of different recipes and iterate on it. I don't know what point you're trying to make.



I don’t get why the seller just wouldn’t do seller financing. It’s not that hard to structure so that they lose nothing if the buyer defaults (assuming a reasonably large down payment). I guess the only downside is not being able to get any equity out of the sale apart from the down payment. But in most cases, the down payment along with the new income stream should be plenty to afford your next house.

