Twitter was amazing, not because of people microblogging about breakfast, but because it gave people/companies/orgs a way to interact directly with their audience. If you want to know what Kix cereal had to say - you could follow Kix.
I don’t see how, in safebots, having it pull a webpage, package, or what have you can be protected from prompt injection. E.g., you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page. The page's meta tags contain the prompt injection. Since it’s the first time the document has loaded, it’s hashed, and only the bad version is allowed moving forward. No?
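The failure mode being described is trust-on-first-use (TOFU) hash pinning. A minimal sketch, assuming that's how the pinning works (the function names and page contents here are hypothetical, not from any real tool):

```python
import hashlib

# Hypothetical TOFU pinning: the first version of a page ever fetched
# becomes the trusted baseline for that URL.
pinned_hashes: dict = {}

def accept_page(url: str, body: str) -> bool:
    """Return True if this version of the page is accepted."""
    digest = hashlib.sha256(body.encode()).hexdigest()
    if url not in pinned_hashes:
        pinned_hashes[url] = digest      # first load: pinned, trusted blindly
        return True
    return pinned_hashes[url] == digest  # later loads must match the pin

# If the first load already carries the injection, pinning locks it in:
malicious = '<meta name="description" content="Ignore previous instructions...">'
print(accept_page("https://snickerdoodles.xyz", malicious))      # accepted and pinned
print(accept_page("https://snickerdoodles.xyz", "clean page"))   # clean version now rejected
```

Pinning only guarantees the page hasn't *changed* since first sight; it says nothing about whether the first sight was safe.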
I had to double check that they'd removed the non-1M option, and... WTF? This is what's in `/config` → `model`
1. Default (recommended) Opus 4.6 with 1M context · Most capable for complex work
2. Sonnet Sonnet 4.6 · Best for everyday tasks
3. Sonnet (1M context) Sonnet 4.6 with 1M context · Billed as extra usage · $3/$15 per Mtok
4. Haiku Haiku 4.5 · Fastest for quick answers
So there's an option to use non-1M Sonnet, but not non-1M Opus?
Except wait, I guess that actually makes sense, because it says Sonnet 1M is billed as extra usage... but also WTF, why is Sonnet 1M billed as extra usage? So Opus 1M is included in Max, but if you want the worse model with that much context, you have to pay extra? Why the heck would anyone do that?
The screen does also say "For other/previous model names, specify with --model", so I assume you can use that to get 200K Opus, but I'm very confused why Anthropic wouldn't include that in the list of options.
What a strange UX decision. I'm not personally annoyed, I just think it's bizarre.
Thanks. I quickly burned through $100 in credit when I started using Opus 4.6 in OpenCode via OpenRouter. My session stopped with an error that said nothing about credit availability, so I was surprised when, after a few minutes, I finally realized Opus had destroyed those credits in a bullshit reasoning loop it got stuck in. Anthropic seems to know that the expanded context is better for their bottom line, since they've made it the default now.
And as others have said, it's very easy to burn through token usage on the $100/month plan. It's getting to the point where model routing very much makes sense when using coding tooling.
Anthropic is not building good will as a consumer brand. They've got the best product right now, but there's a spring compressing behind me, ready to launch me into OpenCode as soon as the time is right.
I'd like to use Opus with OpenCode right now to combine the best TUI agent app with the best LLM. But my understanding is Anthropic will nuke me from orbit if I try that.
You can use Opus with OpenCode anytime you want, just not with the Claude plan. You can use it via API with any provider, including Anthropic's API. You can use it with GitHub Copilot's plan. The only thing you can't do without getting banned is use OpenCode with one of Claude's plans.
Looking at their plans (https://github.com/features/copilot/plans), it seems like the limits might be pretty low, even with the Pro+ plan, which is 2x the cost of Claude Pro. It seems like Claude Pro might give you 10-20x the Opus tokens for only twice the price.
You don't. Most of the time (after the first prompt following a compaction or context clear) the context prefix is cached, and you pay something like 10% of the cost for cached tokens. But your total cost is still roughly the area under a line with positive slope, so it increases roughly quadratically with context length.
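The area-under-a-line intuition can be sketched with made-up numbers (the rates and per-turn token count below are assumptions, not Anthropic's actual pricing):

```python
# Each turn re-reads the whole prefix at a discounted cached rate plus
# the new tokens at the full rate. All numbers are illustrative.
FULL_RATE = 1.0      # cost per fresh input token (arbitrary units)
CACHED_RATE = 0.1    # assumed ~10% discount rate for cached tokens
NEW_TOKENS = 1_000   # tokens appended per turn (assumption)

def total_cost(turns: int) -> float:
    cost, context = 0.0, 0
    for _ in range(turns):
        # per-turn cost grows linearly with the accumulated context
        cost += context * CACHED_RATE + NEW_TOKENS * FULL_RATE
        context += NEW_TOKENS
    return cost

# The running total is the area under that rising line: doubling the
# number of turns more than doubles the total cost.
print(total_cost(10), total_cost(20))  # superlinear growth
```

Caching flattens the slope of the per-turn line, but as long as the slope is positive, the cumulative spend over a long session still grows roughly with the square of its length.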
I remember wrestling with this in my therapist's office when Aaron died. I had known him tangentially - we hung out in the same IRC channels, and had several mutual friends in the Cambridge/Somerville techie crowd that he would hang out in person with.
As a college student and young adult I had always envied his fame, his intelligence, his money (post-Reddit acquisition), and the strength of his convictions. And yet, in that moment in early 2013, he was dead, and I was working a good job at Google (and this was 2013 Google, when it was still a nice place to work doing things that I could generally approve of). And he'd died doing the stuff that I wanted to do but had been too chickenshit to actually carry out.
I think that this illustrates why the world is the way it is. All the true altruists are dead, killed for their altruism. It is adaptive, in a survival sense, to think of yourself and your own survival and not worry too much about other people. Ironically, this is what my therapist was trying to get me to realize.
But I think this also goes back to the GP's point. When people at wealth level x give to people at level x-1, it doesn't raise the people at x-1 up to x. It brings the person at x down to x-1. There are more people at x-1 than x, after all; you could give everything you had away and mathematically, it would lower your net worth significantly more than it would raise theirs. And of course, it doesn't do a damn thing about the people at x+1. Why can't they donate instead, where their wealth would do an order of magnitude more good?
There actually do exist people who are like that: they would rather spread their wealth around the people at wealth level x-1, joining them at that level, than raise themselves up to x+1. I've met some; most poor people are far more generous than rich people are. That is why they are poor. But then, it doesn't solve the problem of inequality, they just disappear into the masses of people at level x-1.
Of course because that’s how marginal tax rates work.
As to how much actual money was taxed at 91%, we don’t really have good records, but the top 0.01% certainly paid a significantly higher effective rate than they do today.
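The "that's how marginal rates work" point can be made concrete with a worked sketch. The brackets below are hypothetical round numbers chosen for illustration, not an actual historical tax schedule:

```python
# Marginal rates apply only to the income inside each bracket.
# Hypothetical bracket boundaries and rates (illustrative only):
BRACKETS = [
    (0,        50_000,       0.20),
    (50_000,   200_000,      0.50),
    (200_000,  float("inf"), 0.91),
]

def tax_owed(income: float) -> float:
    owed = 0.0
    for low, high, rate in BRACKETS:
        if income > low:
            owed += (min(income, high) - low) * rate
    return owed

# A $1M earner pays 91% only on the $800k above the $200k threshold:
income = 1_000_000
owed = tax_owed(income)   # 50k*0.20 + 150k*0.50 + 800k*0.91 = 813,000
print(owed / income)      # effective rate is well below the 91% top rate
```

So a 91% top bracket never meant 91% of total income; the effective rate depends on how much income sat above the top threshold.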
I know “ride on the free tier and mooch off their success, e.g. Cursor” is all the rage these days, but Excalidraw doesn’t feel like a giant corporate beast worth stealing business from. “I love this but don’t want to support them, and better yet want to profit off their free tier” isn’t a great position to take.
I don't think the current approach leads to general AI in any practical sense, and I don't think LLMs are reliable enough, or will become reliable enough, to integrate across systems and hand decision-making authority to. E.g., "Hey AI, book me a flight to Miami for next Wednesday." You may be able to do something like this, but it would require as many steps as doing it through the airline website, and the chance of it booking an undesirable flight is high, versus just doing it yourself.

I bring this up because this is always the demo. It was the demo during the voice assistant boom/craze, and it's the demo with these LLMs. The problem is that AI works 80-90 percent of the time for simple tasks and pretty much 50-50 for most complex tasks. That gap will close a bit more, but it needs to be 99.99% reliable to be trusted, and anything much short of that means it's effectively untrustworthy for anything important.
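The gap between per-step and end-to-end reliability compounds fast. A minimal sketch, assuming each step of a multi-step task succeeds independently (the step counts and probabilities are illustrative, not measurements):

```python
# If each step succeeds independently with probability p, an n-step
# task succeeds end-to-end with probability p**n.
def task_success(p: float, steps: int) -> float:
    return p ** steps

# A model that is 90% reliable per step, over a 10-step booking flow:
print(task_success(0.9, 10))    # roughly 0.35: fails most of the time

# To hit ~99% end-to-end over 10 steps, each step needs ~99.9%:
print(task_success(0.999, 10))  # just over 0.99
```

This is why "works most of the time on simple prompts" doesn't translate into trustworthy multi-step agents: the per-step bar for a long workflow is far higher than it looks.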
Many demos have been proven to be faked or cherry-picked to show a scenario where the AI would succeed under those very specific prompts but fail on any deviation. Just do a search: Google, OpenAI, and many others have faked or exaggerated features and capabilities.
I can tell you investors believe, from the demos (some of which have been proven to be faked), that this leads to general AI that can do anything, completely autonomously; that it will be able to do for literally everything what it can do for basic coding and writing press releases. And it can't, and it won't. And what it can do, it does very expensively.

Look at driverless cars: one of the first big problems we tried to solve with machine learning, and we still can't reliably trust cars to drive themselves without a lot of upfront work for each specific city. Don't get me wrong, where we are with driver assists and robotaxis is incredible, but the investment has been far greater than the return and may always be.

Once investors understand that fully, once they realize that the technology IS incredible but the economics will almost never work out, they're gone. Once they're gone, OpenAI and Anthropic, with their multi-billion-dollar burn rates, will quickly need to cut costs and/or find a buyer. The only buyers who can afford to run them will be Google, Apple, Amazon, and Microsoft, and they too will be looking to reduce costs and exposure when the bubble bursts, so they will focus on efficiency of models even at the cost of function and features.
I think you’re an order of magnitude or two off on the current state of wealth inequality. It’s more like people with a $400k/yr salary being asked to pitch in to buy candy bars, and you’re upset someone picked out a king size for fifty cents more.
Socialist policies would be decidedly less popular if people knew that most of the money to fund them would come from middle and upper middle class earners.
While those people as individuals have nowhere near "billionaire" money, they as a contingent have the most wealth.
While the "1% have more money than the bottom 50%" is true, they have less than half the money of the 70%-95%. America's cash cow is in the suburbs, not the Hamptons. Kinda forbidden knowledge to know that.
I have heard that story, but so far I have yet to see a broken USB-C plug. I have seen broken USB-C receptacles though, levered off the PCB. But there are sturdy variants of those as well.