The harness is the model's "body"; its weights are the cognition. As in nature, they develop together, and the iteration of natural selection acts on both.
If smaller labs (Zai, Moonshot, DeepSeek, Mistral, ...) got together and embraced a harness, opencode for example, as a consortium, then just by the power of "evolution across different environments" they might hit the jackpot earlier than the bigger labs.
Mistral recently came out with their own harness (vibe), and I feel it was a massive missed opportunity versus throwing in with aider or opencode.
It goes the other way around as well. DeepSeek has made quite a few innovations that the US labs were lacking (DSA being the most notable one). It's also not clear to me whether distilled outputs are just an additional ingredient in the recipe rather than the whole "frozen dinner", so to speak. I have no evidence either way, but my guess is the former.
Citation needed. SOTA labs surely have technical protections and legalese against using their outputs for training. It's been done in the past, but what indicates this is still the case?
My experience trying LanceDB has been abysmal. It worked great in dev and small testing environments, but as soon as we tried production workloads it would get extremely slow. We shifted to PostgreSQL + pgvector and had absolutely no issues, even if it is not "engineered for multimodal data". Maybe we were doing something wrong, but we did put effort into trying to make it work - is it really this hard to get it performant?
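For context, a minimal sketch of the kind of pgvector setup being described, assuming psycopg 3 and the pgvector Python bindings; the table name, column, dimension, and index parameters are illustrative, not the commenter's actual schema:

```python
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

# Illustrative connection string and schema.
with psycopg.connect("dbname=app", autocommit=True) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    register_vector(conn)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(768))"
    )
    # An ANN index (IVFFlat here, HNSW is another option) is what keeps large
    # tables fast; without one, every query is a sequential scan.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS items_embedding_idx ON items "
        "USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100)"
    )
    query_vec = np.random.rand(768)  # placeholder query embedding
    rows = conn.execute(
        "SELECT id FROM items ORDER BY embedding <=> %s LIMIT 10",
        (query_vec,),
    ).fetchall()
```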
I’m also curious to hear what challenges you encountered. I’ve used LanceDB for a few projects in production now and it’s worked out reasonably well.
The docs quality is spotty, and the lack of parity between the async and sync Python APIs is frustrating, but otherwise it's been great.
The only performance issues I've had have been A) not rebuilding indexes on an appropriate cadence, B) not filtering the search space enough for queries which bypass the index, or C) running search against millions of vectors on object storage and expecting millisecond latency. (A sketch of A and B follows below.)
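A rough sketch of the first two mitigations, assuming the LanceDB sync Python API; the bucket, table, column names, and index parameters are made up for illustration:

```python
import numpy as np
import lancedb

db = lancedb.connect("s3://my-bucket/lancedb")  # or a local path
tbl = db.open_table("documents")

# (A) Rebuild the ANN index periodically as new rows accumulate; unindexed
# rows are scanned brute-force, which is what slowly degrades query latency.
tbl.create_index(metric="cosine", num_partitions=256, num_sub_vectors=96, replace=True)

# (B) Pre-filter to shrink the search space before the vector search runs,
# instead of post-filtering a huge candidate set.
query_vec = np.random.rand(768)  # placeholder query embedding
results = (
    tbl.search(query_vec)
    .where("tenant_id = 42", prefilter=True)
    .limit(10)
    .to_list()
)
```

Point C has no code fix: latency against millions of vectors on object storage is dominated by storage round-trips, so either cache locally or adjust expectations.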
Please no. They don't have the best interests of React in mind.
They threw their resources behind RSC, turning React, a framework for frontend reactivity, into something where frontend reactivity is opt-in. Meta is needed now more than ever, before React fully becomes a framework for burning compute on Vercel's infra.
They might not have the conflict of interest, but they don't have the business interest either. Meta is a spyware company that makes all of its money from collecting personal data to sell to advertisers. They have zero incentive to dedicate any kind of significant resources to supporting millions of websites that use their internal UI library.
I feel like mathematicians should be able to do a second doctorate-level degree a few years after their first PhD, one that must be in a field adjacent to their own, but not the same one.
The purpose of a PhD is to certify that you're able to do independent research. Many researchers retrain (or just add a research interest) in adjacent fields during their postdocs or later. At that point it's just research.
Besides the habilitation example from rando234789 (https://news.ycombinator.com/item?id=44498702), in Russia (and Ukraine) there do indeed exist two "doctorate levels": кандидат наук [Candidate of Sciences] and доктор наук [Doctor of Sciences].