Great stuff and very timely. I just started getting into using opencode and while I'm hugely optimistic about its capabilities and can use it personally without too much sweat, I was left hoping for something a bit more batteries-included to give to my non-technical colleagues so we can collaborate together. This looks to be exactly what we were looking for, so I'm looking forward to giving it a spin!
Yeah! I feel like until we figure out the correct UX for non-technical people, the right way is a sort of hybrid: you set it up on a remote server (if you know opencode you know openwork), have the non-technical people do a one-time setup to connect to the remote, and from then on you can easily extend capabilities.
This is the approach that I've taken with Open WebUI. It's a great piece of software for exposing a shared GPT interface, but of course that's pretty primitive in the grand scheme of things compared to something like this. But I completely agree with what you're suggesting, and I think it's the only practical way to get a multidisciplinary team collaborating with this kind of tool.
This is the way. Branches are structurally cheap in git for a reason: to make it easy to do exactly this! The only minor pain is that I then have to remember to clean up the unused branches for paths that didn't pan out, but that isn't so bad anyway, and reviewing them reminds me of what didn't work while I was trying to solve something.
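For what it's worth, the cleanup part is scriptable. Here's a minimal Python sketch, assuming `main` is the default branch, that prunes local branches git already considers fully merged:

```python
import subprocess

def git(*args: str) -> str:
    # Run a git subcommand and return its stdout.
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout

# Branches git considers fully merged into main ('main' is an assumption here).
for line in git("branch", "--merged", "main").splitlines():
    name = line.strip()
    if name.startswith("*") or name == "main":
        continue  # skip the currently checked-out branch and main itself
    # -d (not -D) refuses to delete anything with unmerged work, so this stays safe.
    git("branch", "-d", name)
```

Though if you want the dead-end branches as a record of what didn't work, you'd obviously skip the delete and just list them.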
I didn't even think about the analogy to sampling (and the prior controversy), but that is an even better analogy. Ultimately, the difference between creative reuse and a ripoff is a matter of how skillfully it's done, and there's a lot of controversy in the middle!
Wasn't it Picasso who said "good artists borrow, great artists steal"?
I've never heard an artist confident in their own ability complain about this, because they're not threatened by other competent human artists knocking them off, never mind an AI that's even worse at it.
AI is not going to outcompete anyone on volume by flooding the marketplace, because switching costs are effectively zero. Clever artists can probably squeeze controversy and marketing value out of the cases where they're knocked off, taking it as a compliment and juicing it for publicity.
But I liked the Picasso quote when I was younger and earlier on in my journey as a musician because it reminded me to be humble and resist the desire to get possessive -- if what I was onto was really my own, people would like it and others could try to knock it off and fail. That is a lesson that has always served me very well.
I'm starting to think more and more in my older age that being 'great' isn't a good thing. I might actually prefer being good. We'll see how that thought plays out though; give me a couple more years.
The whole idea of outcompeting on volume doesn't add up for music. It's a power-law game, not a commodities game. Spotify is playing a dangerous game pretending that it is, and I have little faith it won't destroy their business long term and turn them into a future Blockbuster or Macy's.
It isn't that simple. One of the implicit tradeoffs you make buying SaaS is that the overall cost of evolution (development and ongoing maintenance) is amortized across the vendor's entire investment and customer base. With CRM in particular, ecosystem integration is one of the heaviest buildouts there is, because each point-solution integration can vary significantly in complexity, and it's also where the combinatoric explosion of misbehavior sets in.
When you decide to pull that in house, you are implicitly burdening yourself with the cost of the buildout as well as ongoing maintenance. True, you could probably knock together an okay v1 of a CRM in house. But are you really going to get it to, and keep it at, production-level quality over time at a lower total cost of ownership? I'm skeptical.
The theoretical party you are describing would probably be better served by simply avoiding Salesforce in favor of a next gen CRM that is both more cost effective and easier to customize. In enterprise contexts, even HubSpot is effectively next gen, but there are also products like Attio et al that have a ton of adoption and strong integration ecosystems (albeit not at the Salesforce level).
When you buy enterprise software from a vendor, you are buying more than "just software": you are also hiring a company's services. And the inverse is true as well; when you choose to build in house, you are implicitly choosing to hire a team internally to staff all of the services you would've expected that vendor to provide.
Certainly, this tradeoff can still make a lot of sense for some companies. The acid test, in my opinion, is whether said company could (and would) actually successfully sell the product they build internally on the open market. If the answer is "yes", the prospect of turning a cost center into a profit center can potentially yield significant long-term ROI for the company.
100%. As an EA you always examine and explain the trade-offs to the business. In many cases that trade-off will remain and buy vs build leans towards buying.
My point is that the decision point has moved. Where five years ago there’d be zero discussion of building internally, those discussions are now going to be very different.
I believe many tech-oriented companies will pull SaaS capabilities in house, and as GenAI developer tools improve, the line will keep moving.
And we don’t need SaaS professional services anymore. Claude Code replaces that entire business model.
A view I hold that feels controversial these days is that GenAI code tools are a net asset for code held by entities that "should" hold it (i.e., workloads where code is a profit center, not a cost center) and a net liability for the opposite. SaaS is a very squishy word and includes a lot of things more accurately called tech-enabled services; Salesforce is one of the best examples (but you could say the same about many ERPs).
Maybe my biggest disagreement with your view is that I think it is simultaneously far too conservative and far too aggressive. Where there was previously a buy-vs-build decision, I disagree that many tech-oriented companies will pull SaaS capabilities inside as GenAI developer tools improve, because the cost of "build" is not actually going to fall relative to the risk of "buy". If anything, the risk of building internally, with GenAI tooling, something you were considering buying is significantly higher unless you are prepared to truly own it, that is, to go head to head with the rest of the market that is focused on building and selling that tool as its full-time corporate mission. The actual risk of owning a tool internally, even allowing for GenAI tooling, is a lot higher than folks realize, because GenAI tooling carries a much greater risk of producing vibeslop, and the only way to avoid that is to be dedicated to producing that tool as one's full-time job, serving the needs of the entire market that needs it in the process. That is impossible to do with an internal tool.
The flip side of that same realization is why I also believe your view is too conservative. The company that might feel empowered to build Salesforce internally with better tooling is not competing with Salesforce; they are competing with the market that is going to /take down/ Salesforce, and with a future version of themselves that would've used Salesforce's eventual successor. Such a company is probably not in the business of building and selling CRMs, although they are likely in the business of using one. The risk of making that conflation is stretching existing resources thin and turning the high-interest credit card of tech debt into a payday loan of GenAI vibeslop.
I do not view GenAI as a democratizer. I view it as an accelerator, capable of accelerating not just "winners take all" dynamics for the top companies that invest enough to avoid vibeslop, but "losers lose all" dynamics in the supply chain for everyone else, whether they try to wing it and capitalize before inevitably losing to future incumbents, or try in vain to turn a legacy firm into an innovator before losing to tech debt.
Everyone likes to believe they'll be able to use these great tools to become a future incumbent. But becoming a future incumbent is something that is very hard to do unless your org + book of business + funding + tools is better than someone else's org + book of business + funding + tools. That is why I don't think it actually changes the buy vs build decision that much; the decision should probably still be to buy, it's just changing the question of what exactly should be bought.
Excellent response. I think my position is that I don’t vibe code, and most senior, deeply experienced devs/architects are doing something else.
We’re not coding. We’re orchestrating well-constructed applications that follow proven principles (DDD, behavior-focused design, packaged business capabilities, behavioral unit testing).
Someone here has lost the plot, and at this point I wonder if it is me. Is software supposed to be deterministic anymore? Are incremental steps expected to be upgrades and not regressions? Is stability of behavior and dependability desirable? Should we culturally reward striving to get more done with less?
...no, I haven't lost the plot. I'm seeing another fad of the intoxicated parting with their money, bending a useful tool into the caricature of a golden hammer. I dread seeing the eventual wreckage and self-realization from the inevitable hangover.
I always thought my job was to be able to prove the correctness of the system, but maybe the reality is that my job was actually just to sling shit at someone until they were satisfied.
I've never understood this argument. Do you ever work with other humans? They are very much not deterministic, yet they can often produce useful code that helps you achieve more than you could by yourself.
Run a model at all, run a model fast, run a model cheap. Pick 2.
With LLM workloads, you can run some of the larger local models (at all) and you can run them cheap on the unified 128GB RAM machines (Strix Halo/Spark) - for example, gpt-oss-120b. At 4-bit quantization, given it's an MoE natively trained at MXFP4, it'll be pretty quick. Other MoEs with small active-parameter counts will also be quick, but things get sluggish as the active parameters increase. The best way to run these models is a multi-GPU rig, so you get speed and VRAM density at once, but that's expensive.
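To put a rough number on "pretty quick": decode speed on these boxes is mostly bounded by memory bandwidth divided by the bytes of active parameters read per token. A napkin sketch, where every figure is a ballpark assumption rather than a measured spec:

```python
# Decode-speed ceiling ~= memory bandwidth / bytes touched per token.
# All numbers are ballpark assumptions, not benchmarks.
active_params = 5.1e9    # gpt-oss-120b active parameters per token (approx.)
bytes_per_param = 0.5    # 4-bit quantization
bandwidth = 256e9        # Strix Halo-class unified memory, bytes/s (approx.)

ceiling = bandwidth / (active_params * bytes_per_param)
print(f"~{ceiling:.0f} tok/s theoretical ceiling")  # ~100 tok/s
```

Real throughput lands well below the ceiling, but the ratio is why models with fatter active-parameter counts feel sluggish on the exact same hardware.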
With other workloads such as image/video generation, the unified VRAM doesn't help as much, and the operations themselves intrinsically run better on beefier GPU cores, in part because many of those models are relatively small compared to LLMs (roughly 6B-20B parameters) but generating from them is definitely GPU-compute intensive. So you get infinitely more from a 3090 (maybe even a slightly lesser card) than you do from a unified-memory rig.
If you're running a mixture of LLM and image/video generation workloads, there is no easy answer. Some folks on a budget opt for a unified-memory machine with an eGPU to get the best of both worlds, but I hear drivers are an issue. Some use Mac Studios, which, while quite fast, lock you into the Metal ecosystem rather than CUDA and aren't as pleasant for development or the surrounding ecosystem. Some build a multi-CPU server rig with a ton of vanilla RAM (popular with folks who wanted to run DeepSeek before RAM prices spiked). Some buy older servers with VRAM-dense but dated cards (think Pascal, Volta, etc., or AMD MI50/MI100). There's no free lunch with any of these options, honestly.
If you don't have a very clear sense of something you can buy that you won't regret, it's hard to go wrong using one of the cloud GPU providers (Runpod, Modal, Northflank, etc.) or something like Fal or Replicate, where you can try out the open-source models and pay per request. Sure, you'll spend a bit more in unit costs, but it'll force you to figure out whether your workloads are pinned down enough that the pain of keeping them in the cloud stings enough to justify buying and owning the metal. If the answer is no, even if you could afford it, you'll often be happiest just using the right cloud service!
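If it helps make the rent-vs-buy tension concrete, it mostly reduces to a breakeven in GPU-hours. A toy calculation where both prices are made-up placeholders:

```python
# Rent-vs-buy breakeven in GPU-hours; both figures are illustrative placeholders.
cloud_rate = 0.80       # $/hr for a rented 24GB-class GPU (assumed)
hardware_cost = 1600.0  # $ for a comparable used card or rig (assumed)

print(f"breakeven at ~{hardware_cost / cloud_rate:.0f} GPU-hours")
# If you won't confidently burn that many hours, keep renting.
```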
Ask me how I figured out all of the above the hard way...
Locally, I use a Mac Studio with a ton of VRAM and just accept the limitations of the Metal ecosystem, which is generally fine for the inference workloads I am consistently running locally (but I think would be a pain for a lot of people).
I can't see it making sense for training workloads if and when I get to them (those I'd put on the cloud). I have a box with a single 3090 to do CUDA dev if I need to, but I haven't needed it that often. And frankly, in terms of grunt the Mac Studio sits a bit under a 3090, but with an order of magnitude more unified VRAM, so it hits the mark for the medium-ish MoE models I like to run locally as well as some of the diffusion inference workloads.
Anything that doesn't work great locally or which is throwaway (but needs to be fast) ends up getting thrown at the cloud. I pull it back to something I can run locally once I'm running it over and over again on a recurring basis.
Very cool! Managing one's boxes as cattle and not pets almost always seems like the better idea in retrospect, but historically it's easier said than done. Moreover, I like the idea of being able to diff a box's actual state against its current Ansible definition to verify that it really is configured as intended, for further parity between deployed and planned.
Definitely! It's all too easy to make a direct change and later forget to 'fold it in' to Ansible and run a playbook. My hope is that `enroll diff` serves as a good reminder if nothing else.
I'm pondering adding some sort of `--enforce` argument to make it re-apply a 'golden' harvested state if you really want to be strict about drift. For now, it's notifications only, though.
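Conceptually it's just a diff between the saved golden state and a fresh harvest. A minimal sketch of the idea in Python; the key names and the enforce behavior here are hypothetical, not enroll's actual API:

```python
# Hypothetical sketch of golden-vs-actual drift detection; not enroll's real API.
def drift(golden: dict, actual: dict) -> dict:
    keys = golden.keys() | actual.keys()
    return {k: (golden.get(k), actual.get(k))
            for k in keys if golden.get(k) != actual.get(k)}

golden = {"sshd.PermitRootLogin": "no", "pkg.nginx": "1.24.0"}
actual = {"sshd.PermitRootLogin": "yes", "pkg.nginx": "1.24.0"}

for key, (want, have) in sorted(drift(golden, actual).items()):
    print(f"DRIFT {key}: want={want!r} have={have!r}")
    # with --enforce, this is where you'd re-apply `want` instead of just notifying
```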
Ah, I remember scheming about buying an NF7-S plus an Athlon XP Barton and unlocking it, combining it with a GeForce4 Ti 4200 and overclocking both, but not even having the pocket change to pull that off. By the time I was far enough along in school to have some of that, I picked up an A64 and a top-of-the-line GeForce 5 in a Black Friday sale and had a great time gaming and coding.
Ironically, all the scheming I did about overclocking turned out to be unnecessary: I found it borderline impossible to actually stress the upper limits of the machine's muscle with day-to-day workloads. It was freeing not to have to think about the machine at all and instead focus on the work I wanted to do with the software I was using and building. Surely a lesson that continues to pay dividends, albeit from simpler times...
Are you experienced with DAWs as a composer or producer?
Many if not most professional producers use MIDI controllers with knobs/sliders/buttons MIDI-mapped to DAW controls. As such, the skeuomorphism actually plays a valuable role in ensuring that the physical-instrument experience maps to their workflows. Secondarily, during production/mastering, producers are generally using automation lanes and envelopes to program parameters into the timeline, and the piano roll to polish the actual notes.
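For anyone who hasn't seen what that mapping looks like underneath, here's a toy Python sketch using the mido library; the CC number and the 'cutoff' target are arbitrary choices for illustration:

```python
# Toy MIDI-mapping loop: a hardware knob sending CC 74 drives a synth parameter.
# Requires the mido package (plus a backend such as python-rtmidi).
import mido

CUTOFF_CC = 74  # CC 74 is conventionally used for brightness/filter cutoff

with mido.open_input() as port:  # opens the default MIDI input port
    for msg in port:
        if msg.type == "control_change" and msg.control == CUTOFF_CC:
            cutoff = msg.value / 127.0  # normalize the 0-127 CC range to 0..1
            print(f"filter cutoff -> {cutoff:.2f}")
```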
When I've historically done working sessions, the composition phase of what I'm doing tends to involve very little interaction with the keyboard, and is almost entirely driven by my interaction with the MIDI controller.
Conversely, when I'm at the production phase, I'm generally not futzing around with either knobs or the controller; I'm interacting with the DAW entirely through automation lanes or by drawing in notes through the piano roll. So I never really use a knob via the mouse, and I've never encountered any professional or even hobbyist musicians who do, except for throwaway experimentation.