I also use them per-token (and strongly prefer that due to a lack of lock-in).
However, from a game theory perspective, a subscription incentivizes the model makers to maximize problem solving in the minimum number of tokens. With per-token pricing, the incentive is to maximize problem solving while inflating token usage.
I don't think this is quite right, because it's the same model underneath. The problem can manifest more through the tooling on top, but even there it's largely hard to pull off without people catching you.
I do agree that Big AI has misaligned incentives with users, generally speaking. This is why I go per-token with a custom agent stack.
I suspect the game-theoretic aspects come into play more with quantization. I have not (anecdotally) experienced this in my API-based, per-token usage, i.e. I'm getting what I pay for.
I saw a funny skit: if the free Claude instance was down for you, you could just ask Rufus, Amazon's shopping AI assistant, your math/coding question phrased as a question about a product, and it would just answer lol.
In my region, a certain small bank had an AI assistant that someone neglected to limit, so you could type whatever you wanted in there without even phrasing it as a question about a product.
For GitHub, this is free marketing: they can say they won't do this, unlike their #1 competitor?
edit: apparently they have as well
> In December 2025, we announced our intention to introduce pricing for self-hosted runners so we could provide stronger support and keep investing in new features and ongoing improvements.
I assume this is their pipelines thing. We only use the webhooks to trigger self-hosted CI.
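For anyone doing the same, a minimal sketch of the webhook side in TypeScript (the secret handling and the `sha256=<hex hmac>` header convention are common practice, not any specific forge's documented API — check your provider's docs):

```typescript
// Verify a webhook payload's HMAC signature before triggering a local CI run.
import crypto from "node:crypto";

function verifySignature(body: string, secret: string, header: string): boolean {
  const expected = crypto.createHmac("sha256", secret).update(body).digest("hex");
  const got = header.replace(/^sha256=/, "");
  // timingSafeEqual throws on length mismatch, so check length first.
  return (
    got.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(got))
  );
}

// Example: a push payload signed with the shared secret passes the check.
const secret = "change-me";
const body = JSON.stringify({ ref: "refs/heads/main" });
const header = "sha256=" + crypto.createHmac("sha256", secret).update(body).digest("hex");

if (verifySignature(body, secret, header)) {
  // In a real receiver this is where you'd spawn the build script,
  // e.g. child_process.execFile("./ci/run.sh", [ref]).
  console.log("signature ok, CI triggered");
}
```

The point of the length check plus `timingSafeEqual` is to avoid both the exception on mismatched lengths and a timing side channel on the comparison.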
Wasp has been trying to convince people to use their DSL for a long time. It's from the low/no-code era. I've never seen it anywhere besides their developer marketing on HN.
Custom DSLs that AIs don't know about are a bad idea, and they're generally a bad idea for other reasons too, because they are hard to get right. You'll have to feed in context on how to use them with every message. Wasp may be OK here, since it predates the inflection point, but it still has a knowledge-gap / usage issue: hardly anyone uses or wants the DSL, so there are few humans who can be in the loop to make sure it works as intended.
Martin from Wasp here -> you are right that we went overly into the DSL; we are actually switching it to TS at the moment (an experimental version has been out for some time, but we're now making it the main way to use Wasp). Not because of AI, though, but because we found it too hard to maintain and develop. We thought the custom ergonomics would be worth it, but it turned out we didn't gain much on that side, while we lost a lot by not using the existing ecosystem of a well-known language.
Btw, AI actually works great with it. I'm sure part of that is that Wasp's DSL has existed for some time now, but it actually worked well from the very start, because the DSL is quite simple (similar to JSON) and AI is very good at generalizing.
So I wouldn't discourage people from writing DSLs because of AI -> AI can understand them very well. I'd discourage completely standalone DSLs for a different reason: you miss out on all the benefits of a strong host language that an embedded DSL gives you. With a standalone DSL you need to implement a compiler, editor extensions, an LSP, maybe a module system if you need it, maybe a package system/manager if you need it, ... . Although when I think about it, that is also easier now with AI than it was before! Hm, yeah, actually maybe custom DSLs are a good idea these days, with AI doing most of the job for you. I still wouldn't go back to a custom DSL for Wasp, however, because the biggest thing for us is probably familiarity -> a custom DSL just scares people off.
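The embedded-DSL idea above can be sketched in a few lines of TypeScript. (All names here are hypothetical, not Wasp's actual API — the point is only that declarations as plain typed objects inherit the host language's compiler, editors, and packages for free.)

```typescript
// Hypothetical embedded DSL: app declarations as plain typed objects.
type Route = { path: string; component: string };
type Query = { name: string; fn: (args: unknown) => unknown };

interface AppSpec {
  title: string;
  routes: Route[];
  queries: Query[];
}

function defineApp(spec: AppSpec): AppSpec {
  // The "compiler" is just the host language; extra validation lives here.
  if (spec.routes.length === 0) throw new Error("app needs at least one route");
  return spec;
}

const app = defineApp({
  title: "TodoApp",
  routes: [{ path: "/", component: "MainPage" }],
  queries: [{ name: "getTasks", fn: () => [] }],
});

console.log(app.routes[0].path); // "/"
```

Type checking, autocomplete, and the LSP all come from TypeScript itself — none of it has to be built for the DSL.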
This resonates with my experience building app DSLs. I've moved to CUE instead of a general purpose PL, to remain in the declarative space.
The fundamental issue that remains is abstraction. Low/no-code is not what developers want. The rationale you use for AI and your product is the same one developers use for not choosing products like yours: it takes developers too far from how things actually work. That pain manifests mostly after development time, so while you can show some nice stats for the basics, the ROI doesn't materialize in the long run. Hiring is also a pain for anything so little used, network effects and all that. Wasp has been around a long time yet never taken off, and that's something we consider when evaluating the broad tech-stack changes that something like Wasp requires.
The moderation tools depend on the implementation of AP, but what I meant is that you depend on each instance's moderation/moderators being effective at combating spam (and more).
That's a problem ATProto will face once/if it really does get decentralized. If some instances are badly moderated, you will suffer the same way as with AP.
AT does not have instances like AP does. You are not tied to the moderation choices of servers. Apps are where moderation happens, and that is a place where competition can occur. Moderation also largely happens at the network layer, so apps can share moderation or use third-party moderation that is not tied to any app.
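A conceptual sketch of that label-based model in TypeScript (this is an illustration of the data flow, not the actual atproto API; the field names and policy shape here are assumptions): independent labelers publish labels about content, and each app decides how to act on labels from the labelers it subscribes to.

```typescript
// A label: some labeler (src) asserts a value (val) about a subject (uri).
type Label = { src: string; uri: string; val: string };

// Each app/user maps label values to an action.
type Policy = Record<string, "hide" | "warn" | "show">;

function moderate(
  postUri: string,
  labels: Label[],
  subscribedLabelers: Set<string>,
  policy: Policy,
): "hide" | "warn" | "show" {
  // Only labels from labelers this app/user subscribes to count.
  const relevant = labels.filter(
    (l) => l.uri === postUri && subscribedLabelers.has(l.src),
  );
  for (const l of relevant) {
    const action = policy[l.val];
    if (action === "hide") return "hide";
    if (action === "warn") return "warn";
  }
  return "show";
}

// Two apps can consume the same labeler's labels with different policies,
// and a labeler need not be tied to any app.
const labels: Label[] = [{ src: "did:example:labeler", uri: "at://post/1", val: "spam" }];
console.log(moderate("at://post/1", labels, new Set(["did:example:labeler"]), { spam: "hide" }));
```

The point is the separation of concerns: labeling is network-level data, while the hide/warn/show decision is per-app and per-user.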
It seems like you do not understand the architecture of ATProto and make claims that are not based in reality.
> Bluesky solved the DM case by adding E2E encryption using the Signal protocol
This is patently false. Bluesky DMs are not E2EE, and they do not use the Signal protocol.
Germ is the MLS-based system that a few Bluesky users are on, but it started separate from ATProto and had atproto account integration added on later. The folks behind it are a separate entity from Bluesky. I'm not keen on this setup; I'd prefer an MLS scheme where more entities control the servers.
I agree that E2EE chat is not the foundation for a Discord alternative and that Colibri has poor messaging and understanding. Communities need permissions, and good UX needs visibility into the data for things like search. E2EE still has unsolved scaling problems at the group sizes real-world communities require.