Hacker News | verdverm's comments

There is no sufficient prompt, because this behavior is trained into the models during the mid-to-late training phases. It's ingrained into the weights.

IME, my parents gave some of the worst advice, in addition to being bigots.

My closest friends are #1 because they know me, my history, and my vices


Sherry Turkle is a name to know on this subject; she's been studying it for decades across multiple technologies.

https://sherryturkle.mit.edu/

She uses the phrase "frictionless relationships" to refer to AI chatbots, and says social media primed us for this.

https://www.youtube.com/live/6C9Gb3rVMTg?t=2127

https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...


You can access Claude models with Google Cloud reliability via VertexAI. The caveat is that you cannot use your subscription; it's per-token pricing only.

I personally prefer per-token; it makes you more thoughtful about your setup and usage, instead of spray and pray.

You can also access the notable open-weight models with VertexAI; you only need to change the model id string.
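For illustration, a minimal sketch of what "only change the model id string" means: Vertex AI addresses models behind a common publisher-model resource path, so switching between a Claude model and an open-weight model is a string change at the call site. The project, region, and model ids below are placeholder assumptions, not real values.

```python
# Sketch: Vertex AI addresses models via a publisher-model resource path.
# Switching from Claude to an open-weight model is just a different
# publisher/model id string; project and region values here are made up.
def vertex_model_path(project: str, region: str, publisher: str, model_id: str) -> str:
    """Build a Vertex AI publisher-model resource path."""
    return (
        f"projects/{project}/locations/{region}"
        f"/publishers/{publisher}/models/{model_id}"
    )

claude_path = vertex_model_path("my-project", "us-east5", "anthropic", "claude-opus-4")
llama_path = vertex_model_path("my-project", "us-central1", "meta", "llama-3.1-405b-instruct-maas")
```

Same request shape, different resource path; that is roughly the whole migration cost at the call site.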


I also use them per-token (and strongly prefer that due to a lack of lock-in).

However, from a game theory perspective, when there's a subscription, the model makers are incentivized to maximize problem solving with the minimum number of tokens. With per-token pricing, the incentive is to maximize problem solving while increasing token usage.
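The incentive asymmetry above can be made concrete with some toy arithmetic (all prices here are made-up assumptions, not any vendor's actual rates):

```python
MTOK = 1_000_000  # tokens per million

def per_token_revenue(tokens: int, price_per_mtok: float) -> float:
    """Provider revenue under per-token pricing: grows with tokens emitted."""
    return tokens / MTOK * price_per_mtok

def subscription_margin(flat_fee: float, tokens: int, cost_per_mtok: float) -> float:
    """Provider margin under a flat subscription: every extra token is pure cost."""
    return flat_fee - tokens / MTOK * cost_per_mtok

# Doubling output tokens doubles per-token revenue...
verbose = per_token_revenue(2 * MTOK, 15.0)   # 30.0
concise = per_token_revenue(1 * MTOK, 15.0)   # 15.0

# ...but shrinks the margin on a flat subscription.
sub_verbose = subscription_margin(100.0, 2 * MTOK, 5.0)  # 90.0
sub_concise = subscription_margin(100.0, 1 * MTOK, 5.0)  # 95.0
```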


I don't think this is quite right, because it's the same model underneath. This problem can manifest more through the tooling on top, but even there it's hard to do without people catching on.

I do agree that Big AI has misaligned incentives with users, generally speaking. This is why I pay per-token and use a custom agent stack.

I suspect the game-theoretic aspects come into play more with quantization. I have not (anecdotally) experienced this in my API-based, per-token usage; i.e., I'm getting what I pay for.


We tried this, but the quota for Opus models defaults to 0 on VertexAI and quota increase requests are auto-rejected.

Any tips?


What? There's no quota at all. You pay per token up to infinity.

There are in fact quotas and rate limits in VertexAI, albeit generous ones that are automatically increased based on spend.

You can use your subscription for Anthropic-hosted Claude models?

No, unless you count tricks which are explicitly against ToS

Don't know. I tried Anthropic directly a long time ago and was frustrated by their uptime issues. Seems it has not improved in the years since.

You mean Google Chaos Services as we call them?

I saw a funny skit where, if the free Claude instance was down for you, you could just ask Rufus, Amazon's shopping AI assistant, your math/coding question phrased as a question about a product, and it would just answer, lol.

In my region a certain small bank had an AI assistant which someone neglected to limit, so you could put whatever there and not even phrase it as a question about a product.

Coming from GitHub, is this marketing implying they won't do the same as their #1 competitor?

edit: apparently they have as well

> In December 2025, we announced our intention to introduce pricing for self-hosted runners so we could provide stronger support and keep investing in new features and ongoing improvements.

I assume this is their pipelines thing. We only use the webhooks to trigger self-hosted CI.


Wasp has been trying to convince people to use their DSL for a long time. It's from the low/no-code era. I've never seen it anywhere besides their developer marketing on HN.

Custom DSLs that AIs don't know about are a bad idea, and custom DSLs are generally a bad idea for many other reasons too, because they are hard to get right. You are going to have to feed in the context on how to use them with every message. Wasp may be OK here, as they predate the inflection point. They still have a knowledge gap / usage issue: no one really uses or wants the DSL, so there are few humans who can be in the loop to make sure it works as intended.


Martin from Wasp here -> you are right that we went overly deep into the DSL. We are actually switching it to TS at the moment (an experimental version has been out for some time, but we're now making it the main way to use Wasp), not because of AI, but because we found it was too hard to maintain and develop. We thought the custom ergonomics would be worth it, but it turned out we didn't gain much on that side, while we lost a lot by not using the existing ecosystem of a well-known language.

Btw, AI actually works great with it. I am sure part of that is that Wasp's DSL has existed for some time now, but it actually worked well from the very start, because the DSL is quite simple (similar to JSON) and AI knows how to generalize very well.

So I wouldn't discourage people from writing DSLs because of AI -> AI can understand them very well. The real reason is missing out on all of the benefits of a strong host language by not doing it as an embedded DSL. If you are doing your own, completely standalone DSL, you will need to implement a compiler, editor extensions, an LSP, maybe a module system if you need it, maybe a package system/manager if you need it, and so on. Although when I think about it, that is also easier now with AI than it was before! Hm, yeah, actually maybe custom DSLs are a good idea these days, with AI doing most of the job for you. I still wouldn't go back to a custom DSL for Wasp, however, because the biggest thing for us is probably familiarity -> a custom DSL just scares people off.
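To make the embedded-DSL point concrete, here is a hypothetical sketch (plain Python, not actual Wasp syntax): when app declarations are just host-language values, the compiler, editor tooling, type checker, and package ecosystem all come for free.

```python
# Hypothetical embedded DSL for declaring an app; all names are made up.
from dataclasses import dataclass, field

@dataclass
class Route:
    path: str
    component: str

@dataclass
class App:
    title: str
    routes: list[Route] = field(default_factory=list)

    def route(self, path: str, component: str) -> "App":
        """Declare a route; returns self so declarations chain fluently."""
        self.routes.append(Route(path, component))
        return self

todo = App("TodoApp").route("/", "MainPage").route("/about", "AboutPage")
```

No custom parser, LSP, or module system needed; the host language already provides all of it.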


This resonates with my experience building app DSLs. I've moved to CUE instead of a general-purpose PL, to remain in the declarative space.

The fundamental issue that remains is abstraction. Low/no-code is not what developers want. The rationale you use for AI and your product is the same one developers use for not choosing products like yours: it takes developers too far from how things actually work. That pain manifests most after development time. So while you can show some nice stats for the basics, the ROI doesn't materialize in the long run. Hiring is also a pain for anything so little used, network effects and all that. Wasp has been around a long time, yet it has never taken off. This is something we consider when looking at the broad tech-stack changes that something like Wasp requires.


Betteridge's Law

> Identity for agents is a hard problem

Which is why I would not trust a single commit of a vibe-coded project


ActivityPub is more of a zombie project than ATProtocol at this point. AP has plenty of problems that this fresh account, made to disparage, omits.

I'm looking forward to a new protocol that combines the best of what we have with a robust permission system from the start.


Seems you wanted to respond to the OP? I was mainly inquiring about what AS2 is.

> AP has plenty of problems this fresh account made to disparage omits.

Isn't that a problem with moderation instead? If ATProto becomes decentralized someday, it'll have the same issue


ATProto actually has a very good moderation design: user choice, anyone can label, and it's composable. Feeds are similarly well designed for federation.

https://bsky.social/about/blog/03-12-2024-stackable-moderati...
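As a rough sketch of why the labeling model composes: an ATProto label is a small standalone record (the field names below follow the com.atproto.label.defs lexicon, but every value is an illustrative placeholder), so any labeler can emit one about any content, and any app can choose which labelers' output it applies.

```python
# Hypothetical label record; field names follow com.atproto.label.defs,
# but the DIDs, URI, and values here are made-up placeholders.
label = {
    "src": "did:plc:examplelabeler",  # DID of the labeling service
    "uri": "at://did:plc:someuser/app.bsky.feed.post/3kabc123",  # labeled content
    "val": "spam",                    # the label value applied
    "cts": "2024-03-12T00:00:00Z",    # when the label was created
}

def subscribed_labels(labels: list[dict], subscriptions: set[str]) -> list[dict]:
    """An app applies only the labels from labelers the user subscribes to."""
    return [lbl for lbl in labels if lbl["src"] in subscriptions]
```

Because labels are independent records keyed by the labeler's DID, third-party moderation can be layered on without being tied to any server or app.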

> Seems you wanted to respond to the OP?

yes


The moderation tools depend on the implementation of AP, but what I meant is that you depend on each instance's moderation/moderators to be effective at combating spam (and more).

A problem that ATProto will face once/if it really does become decentralized. If some instances are badly moderated, you will suffer the same as with AP.


AT does not have instances like AP. You are not tied to the moderation choices of servers. Apps are where moderation happens, and that is a place where competition can occur. Moderation also largely happens at the network layer, so apps can share moderation or use third-party moderation (not tied to any app).

It seems like you do not understand the architecture of ATProto and make claims that are not based in reality.


> Bluesky solved the DM case by adding E2E encryption using the Signal protocol

This is patently false. Bluesky DMs are not E2EE, and they do not use Signal.

Germ is the MLS-based system that a few Bluesky users are on, but it started separate from ATProto and had atproto account integration added on later. The folks behind it are a separate entity from Bluesky. I'm not keen on this setup; I'd prefer an MLS scheme where more entities control the servers.

I agree that E2EE chat is not the foundation for a Discord alternative, and that Colibri has poor messaging and understanding. Communities need permissions, and the UX needs visibility into the data for things like search. E2EE has unsolved scaling problems at the sizes real-world communities require.

