The harness is the model's "body"; its weights are the cognition. Like in nature, they develop together, and the iteration of natural selection works on both.

If the smaller labs (Zai, Moonshot, DeepSeek, Mistral...) got together and embraced one harness, opencode for example, as a consortium, then just by the power of "evolution across different environments" they might hit the jackpot earlier than the bigger labs.


Mistral recently came out with their own harness (vibe) and I feel like it was a massive missed opportunity vs. throwing in with aider or opencode.

But they rely on distilling the output of the leading American models, which will probably train against their own harnesses.

Someone has to do the baseline training, development, and innovation. It can't be clones all the way down.


Why not? Humans are (very nearly) clones all the way down.

It goes the other way around as well. DeepSeek has made quite a few innovations that the US labs were lacking (DSA being the most notable one). It's also not clear to me how much the distilled outputs are just an additional ingredient in the recipe rather than the whole "frozen dinner", so to speak. I have no evidence to say it's one way or the other, but my guess is the former.

Citation needed. SOTA labs surely have technical protections and legalese against using their outputs for training. It's been done in the past, but what indicates this is still the case?

This didn't stop the millions of copyrighted works used to train the models.

>Citation needed. SOTA labs surely have technical protections

They have unlimited APIs as long as you pay; how would they control how you use them?

> and legalese against using their outputs for training.

It's a whole different jurisdiction, and in general Chinese companies care way less about copyright infringement.

https://en.wikipedia.org/wiki/Counterfeit_consumer_good https://en.wikipedia.org/wiki/Allegations_of_intellectual_pr... https://en.wikipedia.org/wiki/China%E2%80%93United_States_tr...


My experience trying LanceDB has been abysmal. It worked great in dev and small testing environments, but as soon as we tried production workloads it would get extremely slow. We shifted to PostgreSQL + pgvector and had absolutely no issues, even if it is not "engineered for multimodal data". Maybe we were doing something wrong, but we did put effort into trying to make it work - is it this hard to get it performant?
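
For reference, the pgvector setup we're contrasting it with looks roughly like this - a minimal sketch, not our actual schema, assuming psycopg 3 and pgvector >= 0.5 (needed for HNSW); the table, columns, and 768-dim embeddings are made up:

    import psycopg  # psycopg 3

    conn = psycopg.connect("dbname=app")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
        cur.execute("""
            CREATE TABLE IF NOT EXISTS items (
                id bigserial PRIMARY KEY,
                body text,
                embedding vector(768)  -- dimension must match your embedding model
            )
        """)
        cur.execute(
            "CREATE INDEX IF NOT EXISTS items_embedding_idx "
            "ON items USING hnsw (embedding vector_cosine_ops)"
        )
        # Nearest-neighbour query; the vector is passed as a pgvector text literal.
        qvec = "[" + ",".join(["0.0"] * 768) + "]"
        cur.execute(
            "SELECT id, body FROM items ORDER BY embedding <=> %s::vector LIMIT 10",
            (qvec,),
        )
        rows = cur.fetchall()

The <=> operator is cosine distance, which matches the vector_cosine_ops index above.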

I’m also curious to hear what challenges you encountered. I’ve used LanceDB for a few projects in production now and it’s worked out reasonably well.

The docs quality is spotty, and the lack of parity between the async and sync python API is frustrating, but otherwise it’s been great.

The only performance issues I’ve had have been A) not rebuilding indexes on an appropriate cadence, B) not filtering the search space enough for queries which bypass the index, or C) running searches against millions of vectors on object storage and expecting millisecond latency.
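
To make (A) and (B) concrete, here's a minimal sketch with the OSS Python sync client - the path, table, and column names are made up, and the index parameters depend entirely on your data:

    import lancedb

    db = lancedb.connect("./lance_data")   # local OSS LanceDB; path is hypothetical
    tbl = db.open_table("docs")            # hypothetical table with a "vector" column

    # (A) (re)build the ANN index on a regular cadence after bulk writes;
    #     tune partitions/sub-vectors to your dataset size
    tbl.create_index(metric="cosine", num_partitions=256, num_sub_vectors=96)

    # (B) prefilter to shrink the search space before the vector comparison
    query_vector = [0.0] * 768             # dimension must match the table's vector column
    results = (
        tbl.search(query_vector)
        .where("tenant_id = 'acme'", prefilter=True)   # hypothetical metadata column
        .limit(10)
        .to_pandas()
    )

If the index goes stale, rows added since the last create_index call get searched exhaustively, which is usually where the slowdown shows up.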


Great, thanks for the feedback! I work at LanceDB and will take these points into account (esp. the docs).

Curious what performance issues you faced. Was that in OSS LanceDB? And what were the challenges?

Inserts became increasingly slow: more than 10 seconds for a single chat-completion insert after ~10,000 entries, on k8s Longhorn atop NVMe.
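
In case it helps reproduce: one common culprit for this pattern is row-at-a-time writes, since each call appends its own small fragment. A sketch of the batched alternative (names and the flush threshold are illustrative, not what we ran):

    import lancedb

    db = lancedb.connect("./chat_store")   # hypothetical local path
    tbl = db.open_table("completions")     # hypothetical table with "text" and "vector" columns

    # Buffer completions and flush in batches: one tbl.add() per batch
    # means one append instead of one tiny fragment per message.
    buffer = []

    def record(text, embedding):
        buffer.append({"text": text, "vector": embedding})
        if len(buffer) >= 256:             # flush threshold is arbitrary
            tbl.add(buffer)
            buffer.clear()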

Great stuff, now if you could please do gemini-2.5-pro-code that would be great.


Nice, congrats. But that O looks like an ass.


That's the corporate design of the industry: Why do AI company logos look like buttholes? - https://news.ycombinator.com/item?id=43649640


This. Vercel is at a ~$10B valuation with a business built atop React - they should, and probably will, take more of Meta's space as steward for it.


Please no. They don't have the best interests of React in mind.

They threw their resources behind RSC to make React, a framework for frontend reactivity, require opting in to frontend reactivity. Meta is needed more than ever at this point, before React fully becomes a framework for burning compute on Vercel's infra.


I agree with this. I’d prefer to have Meta be the steward for React instead of Vercel because Meta does not have a conflict of interest.


They might not have the conflict of interest, but they don't have the business interest either. Meta is a spyware company that makes all of its money from collecting personal data to sell to advertisers. They have zero incentive to dedicate any kind of significant resources to supporting the millions of websites using their internal UI library.


Because Vercel makes money when components are rendered server side, not client side.

I know almost nobody who even uses server components. They're right out if your backend isn't Node.


That is exactly why I stopped using React 2 years ago


Taken together with the student visa changes: thanks, Trump, for helping solve Brazil's brain drain.


Yes, and it started today.


No, it's for new H1-Bs and renewals, and it starts tomorrow.


You are correct [although what was said at the Oval Office was different].


Missing a zero here for a realistic valuation of the indisputable market leader in the most important interface of computing.


I feel like mathematicians should be able to do a second doctorate-level degree a few years after their first PhD, one that must be in a field adjacent to their own, but not the same one.


The purpose of a PhD is to certify that you're able to do independent research. Many researchers retrain (or just add a research interest) in adjacent fields during their postdocs or later. At that point it's just research.


It's possible! Among somewhat famous mathematicians, at least Bela Bollobas has two PhDs: one in discrete geometry and one in functional analysis.

Try doing that in the modern academic environment, though.


Besides the habilitation example from rando234789 (https://news.ycombinator.com/item?id=44498702), in Russia (and Ukraine) there indeed exist two "doctorate levels": кандидат наук [Candidate of Sciences] and доктор наук [Doctor of Sciences].


I feel like most sciences should have this; it would accelerate science a lot via the cross-pollination of ideas and techniques.

But I can imagine that drawing connections between different branches of maths would be especially powerful, yes


Check out the idea of a habilitation: https://en.wikipedia.org/wiki/Habilitation - at least in Germany, it is pretty much what you describe.

