If I can run resume {session_id} within 30 days of a file’s latest change, there’s a strong chance I’ll continue evolving that story thread—or at least I’ve removed the friction if I choose to.
In an environment with a lot of "agents" cranking away on things, a file that hasn't changed in 30 days seems unlikely to be worth revisiting with 30-day-old context, versus starting fresh with everything that's been changed and learned since then.
These articles are largely based on a false equivalence of LLM=moat.
That's not the case. OpenAI is advancing on many fronts: Codex, vector stores, embeddings, the Responses API, containers, batch processing, text-to-speech, image generation... the list goes on.
Results from a one-shot approach quickly converge on the default “none found” outcome when reasoning isn’t grounded in a paper corpus via proper RAG tooling.
Can you provide more context to your statement? Are you talking about models in general? Or specific recent models? I'm assuming "one-shot approach" is how you classify the parent comment's question (and subsequent refined versions of it).
If some task has a known step-by-step pattern, then doing it step by step makes perfect sense. That is doing the thing. Taking the known shortest/best path.
Doing the thing is going to involve both direct steps, and indirect steps necessary to do the direct steps.
Not doing the thing means doing things other than the shortest/safest/most effective path to getting the thing done.
Probably. Their final mixing chain is quite interesting too. There's a thread on gearspace that I can't find right now, which details how they record stems into a Roland S760 sampler because it colours the sound in a pleasantly digital way.