Hacker News | btbuildem's comments

> doesn't make financial sense to self-host

I guess that's debatable. I regularly run out of quota on my Claude Max subscription. When that happens, I can sort of kind of get by with my modest setup (2x RTX 3090) and quantized Qwen3.

And this does not even account for privacy and availability. I'm in Canada, and as the US is slowly consumed by its spiral of self-destruction, I fully expect at some point a digital iron curtain will go up. I think it's prudent to have alternatives, especially with these paradigm-shattering tools.


I think AI may be the only place you could get away with calling a 2x350W GPU rig "modest".

That's like ten normal computers' worth of power for the GPUs alone.


That's maybe a few dollars to tens of dollars in electricity per month, depending on where in the US you live.

> That's like ten normal computers' worth of power for the GPUs alone.

Maybe if the "computer" in question is a smartphone? Remember that the M3 Ultra is a 300 W+ chip that won't beat one of those 3090s in compute or raster efficiency.


I wouldn't class the M3 Ultra as a "normal" computer either. That's a big-ass workstation. I was thinking along the lines of a typical Macbook or Mac Mini or Windows laptop, which are fine for 99% of anyone who isn't looking to play games or run gigantic AI models locally.

Those aren't "normal" computers, either. They're iPad chips running in the TDP envelope of a tablet, usually with iPad-level performance to match.

Did you even try to read and understand the parent comment? They said they regularly run out of quota on the exact subscription you're advising they subscribe to.

Pot, kettle

Self-hosting training (or gaming) makes a lot of sense, and once you have the hardware self-hosting inference on it is an easy step.

But if you have to factor in hardware costs, self-hosting doesn't seem attractive. All the models I can self-host I can browse on OpenRouter and instantly find a provider with great prices. With most of the cost being in the GPUs themselves, it just makes more sense to have others run them, with better batching and GPU utilization.


If you can get near 100% utilization for your own GPUs (i.e. you're letting requests run overnight and not insisting on any kind of realtime response) it starts to make sense. OpenRouter doesn't have any kind of batched requests API that would let you leverage that possibility.

For inference, even with continuous batching, getting 100% MFU is basically impossible in practice. Even the frontier labs struggle with this in highly efficient InfiniBand clusters. It's slightly better with training workloads, just due to all the batching and parallel compute, but still mostly unattainable with consumer rigs (you spend a lot of time waiting for I/O).

I also don't think 100% util is necessary, to be fair. I get a lot of value out of my two rigs (2x RTX Pro 6000, and 4x 3090) even though it may not be 24/7 100% MFU. I'm always training, generating datasets, running agents, etc. I would never consider this a positive ROI measured against capex though; that's not really the point.


Isn't this just saying that your GPU use is bottlenecked by things such as VRAM bandwidth and RAM-VRAM transfers? That's normal and expected.

No, I'm saying there are quite a few more bottlenecks than that (I/O being a big one). Even in the more efficient training frameworks, there's per-op dispatch overhead in Python itself: all the boxing/unboxing of Python objects to C++ handles, dispatcher lookup + setup, all the autograd bookkeeping, etc.

The sum of all those bottlenecks is why you'd never get to 100% MFU (but I was conceding you probably don't need to in order to get value).
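If you want to see the dispatch side of it directly, here's a minimal micro-benchmark sketch (assumes PyTorch and a CUDA device; the exact gap will vary a lot by hardware):

    import time
    import torch

    x = torch.randn(1024, 1024, device="cuda")

    def chain(t):
        for _ in range(200):
            t = t + 1.0       # 200 tiny kernels: each pays dispatch + launch
        return t

    def single(t):
        return t + 200.0      # same arithmetic, one dispatch, one kernel

    for fn in (chain, single):
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        fn(x)
        torch.cuda.synchronize()
        print(fn.__name__, f"{(time.perf_counter() - t0) * 1e3:.2f} ms")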


In Silicon Valley we pay PG&E close to 50 cents per kWh. An RTX 6000 PC uses about 1 kW at full load, and renting such a machine from vast.ai costs 60 cents/hour as of this morning. It's very hard for heavy-load local AI to make sense here.

Yikes.. I pay ~7¢ per kWh in Quebec. In the winter the inference rig doubles as a space heater for the office, I don't feel bad about running local energy-wise.
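For concreteness, a quick back-of-envelope with the two rates in this thread (a sketch that assumes a 1 kW rig at full load, 24/7; real duty cycles are much lower):

    # Monthly electricity for a 1 kW rig running around the clock
    kw, hours_per_month = 1.0, 24 * 30
    for label, rate in (("Silicon Valley", 0.50), ("Quebec", 0.07)):  # USD/kWh
        print(f"{label}: ${kw * hours_per_month * rate:.2f}/month")
    # -> Silicon Valley: $360.00/month, Quebec: $50.40/month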

And you're forgetting that renting from vast.ai would STILL be more expensive than OpenRouter's API pricing, and even more so than AI subscriptions, which actively LOSE money for the company.

So I would still point to the GP (original comment): yes, it might not make financial sense to run these AI models. [They make sense when you want privacy etc., which are all fair concerns, but just not financial sense.]

But the fact that these models are open source still means they can be run locally if the dynamics shift in the future and running such large models locally starts to make sense. Even just having that possibility, plus the fact that multiple providers can now compete on, say, OpenRouter, definitely makes me appreciate GLM & Kimi compared to their proprietary counterparts.

Edit: I highly recommend this video: https://www.youtube.com/watch?v=SmYNK0kqaDI [AI subscription vs H100]

It's honestly one of the best videos I've watched on this topic.


Why did you quote yourself at the end of this comment?

Oops, sorry. Fixed it now. I'm trying an HN browser extension, and if I have any text selected it can actually quote it; I think that's what happened, or some such bug, I'm not sure.

It's fixed now :)


> I regularly run out of quota on my Claude Max subscription. When that happens, I can sort of kind of get by with my modest setup (2x RTX 3090) and quantized Qwen3.

When talking about fallback from Claude plans, the correct financial comparison would be the same model hosted on OpenRouter.

You could buy a lot of tokens for the price of a pair of 3090s and a machine to run them.


> You could buy a lot of tokens for the price of a pair of 3090s and a machine to run them.

That's a subjective opinion, to which the answer is "no you can't" for many people.


Did the napkin math on M3 Ultra ROI when DeepSeek V3 launched: at $0.70/2M tokens and 30 tps, a $10K M3 Ultra would take ~30 years of non-stop inference to break even - without even factoring in electricity. Clearly people aren't self-hosting to save money.

I've got a lite GLM sub at $72/yr, which would require 138 years to burn through the $10K M3 Ultra sticker price. Even GLM's highest-cost Max tier (20x the lite quota) at $720/yr would buy you ~14 years.
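For anyone who wants to poke at the napkin math, the same arithmetic in runnable form (the numbers quoted above; electricity ignored):

    # Years of non-stop inference before a $10K M3 Ultra beats API pricing
    machine_cost = 10_000               # USD
    price_per_token = 0.70 / 2_000_000  # USD, $0.70 per 2M tokens (DeepSeek V3)
    tps = 30                            # sustained tokens/second

    api_cost_per_year = price_per_token * tps * 60 * 60 * 24 * 365
    print(machine_cost / api_cost_per_year)   # ~30 years vs the API

    for sub in (72, 720):                     # GLM lite / Max, USD per year
        print(machine_cost / sub)             # ~139 and ~14 years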


And it's worth noting that you can get DeepSeek at those prices from DeepSeek (Chinese), DeepInfra (US with Bulgarian founder), NovitaAI (US), AtlasCloud (US with Chinese founder), ParaSail (US), etc. There is no shortage of companies offering inference, with varying levels of trustworthiness, certificates, and promises around (lack of) data retention. You just have to pick one you trust.

Everyone should do the calculation for themselves. I too pay for a couple of subs. But I'm noticing that having an agent work for me 24/7 changes the calculation somewhat. Often not taken into account: the price of input tokens. To produce 1K of code for me, the agent may need to churn through 1M tokens of codebase. IDK if that will be cached by the API provider or not, but that makes a 5-7x price difference. Decent discussion today about that and more: https://x.com/alexocheema/status/2020626466522685499
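To make the input-token point concrete, a toy calculation (the per-1M-token prices below are hypothetical, chosen only to illustrate the ratio; check your provider's cached vs. uncached rates):

    # One agent turn: 1M input tokens of codebase, ~1K tokens of code out
    input_toks, output_toks = 1_000_000, 1_000
    in_rate, cached_rate, out_rate = 3.00, 0.50, 15.00  # USD per 1M tokens (made up)

    uncached = input_toks / 1e6 * in_rate + output_toks / 1e6 * out_rate
    cached = input_toks / 1e6 * cached_rate + output_toks / 1e6 * out_rate
    print(f"${uncached:.2f} vs ${cached:.2f}: {uncached / cached:.1f}x")  # ~5.9x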

Doing inference with a Mac Mini to save money is more or less holding it wrong. Of course if you buy some overpriced Apple hardware it’s going to take years to break even.

Buy a couple real GPUs and do tensor parallelism and concurrent batch requests with vllm and it becomes extremely cost competitive to run your own hardware.
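As a sketch of what that looks like with vLLM's offline API (the model name and GPU count here are just examples, not a recommendation):

    from vllm import LLM, SamplingParams

    # Shard one model across 2 GPUs; vLLM batches the requests itself
    llm = LLM(model="Qwen/Qwen3-30B-A3B", tensor_parallel_size=2)
    params = SamplingParams(temperature=0.7, max_tokens=512)

    prompts = [f"Summarize ticket #{i}" for i in range(64)]
    for out in llm.generate(prompts, params):  # continuous batching under the hood
        print(out.outputs[0].text[:80])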


> Doing inference with a Mac Mini to save money is more or less holding it wrong.

No one's running these large models on a Mac Mini.

> Of course if you buy some overpriced Apple hardware it’s going to take years to break even.

Great, where can I find cheaper hardware that can run GLM 5's 745B or Kimi K2.5's 1T models? Currently it requires 2x M3 Ultras (1TB VRAM) to run Kimi K2.5 at 24 tok/s. [1] What are the better-value alternatives?

[1] https://x.com/alexocheema/status/2016404573917683754


Six months ago I'd have said EPYC Turin. You could do a heck of a build with 12-channel DDR5-6400 and a GPU or two for the dense model parts. $20K would have been a huge budget for a homelab CPU/GPU inference rig at the time. Now $20K won't buy you the memory.

I don't think an Apple PC can run the full DeepSeek or GLM models.

Even if you quantize the hell out of the models to fit in the memory, they will be very slow.


Your $5,000 PC with 2 GPUs could have bought you 2 years of Claude Max, a much more powerful model with longer context. In 2 years you could make that investment back in pay raises.

> In 2 years you could make that investment back in pay raises.

Could you elaborate? I fail to grasp the implication here.


> In 2 years you could make that investment back in pay raises.

You can't be a happy Uber driver making more money over the next 24 months by having a fancy car fitted with the best FSD in town when all the cars in your town have the same FSD.


They don't have the same human in the loop, though.

That software is called autonomous agents; the term "autonomous" has nothing to do with a human in the loop, it's the complete opposite.

This claim has so many assumptions mixed in that it's utterly useless.

Unless you already had those cards, it probably still doesn't make sense from a purely financial perspective, unless there are other factors you're discounting.

Doesn’t mean you shouldn’t do it though.


How does your quantized Qwen3 compare in code quality to Opus?

Not the person you’re responding to, but my experience with models up through Qwen3-coder-next is that they’re not even close.

They can do a lot of simple tasks in common frameworks well. Doing anything beyond basic work will just burn tokens for hours while you review and reject code.


It's just as fast, but not nearly as clever. I can push the context size to 120k locally, but the quality of the work it delivers starts to falter above, say, 40k. Generally you have to feed it more bite-sized pieces, and keep one chat to one topic. It's definitely a step down from SOTA.

How do you manage scope creep (i.e., context size) and contradictory information in the context?

Good question. We don’t pass the entire graph into the model. The graph acts as an index over structured notes. The assistant retrieves only the relevant notes by following the graph. That keeps context size bounded and avoids dumping raw history into the model.

For contradictory or stale information, since these are based on emails and conversations, we use the timestamp of the conversation to determine the latest information when updating the corresponding note. The agent operates on that current state.

That said, handling contradictions more explicitly is something we’re thinking about. For example, flagging conflicting updates for the user to manually review and resolve. Appreciate you raising it.
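In rough terms, the staleness rule is last-write-wins per note; a minimal sketch (the names here are illustrative, not our actual schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Note:
        key: str               # e.g. "acme.renewal_date" (hypothetical)
        value: str
        updated_at: datetime   # timestamp of the source conversation

    def apply_update(notes: dict[str, Note], incoming: Note) -> None:
        # Keep whichever version comes from the newer conversation
        current = notes.get(incoming.key)
        if current is None or incoming.updated_at > current.updated_at:
            notes[incoming.key] = incoming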


> That said, handling contradictions more explicitly is something we’re thinking about.

That's a great idea. The inconsistencies in a given graph are just where attention is needed. Like an internal semantic diff. If you aim it at values it becomes a hypocrisy or moral complexity detector.


Interesting framing! We’ve mostly been thinking of inconsistencies as signals that something was missed by the system, but treating them as attention points makes sense and could actually help build trust.

This was something I was working on for a personal solution (flagging various contradictory threads). I suspect it is a common use case.

That’s interesting. Would be curious to know what types of contradictions you were looking at and how you approached flagging them.

As a corporate drone, keeping track of various internal contradictions in emails is the name of the game (one that my former boss mastered, but in a very manual way). In a very boring way, he was able to say: today you are saying X, but on date Y you actually said Z.

His manual approach won't work if applied directly (or more specifically, it will, but it would be unnecessarily labor-intensive, and on a big enough set prohibitively so, because it would require constantly filtering and re-evaluating all emails). It can still be automated, though.

As for the exact approach, it's a slightly longer answer, because it is a mix of small things.

I try to track which LLM excels at which task (and assign tasks based on those tracking scores). It may seem irrelevant at first, but small rubrics like 'can it handle structured JSON' will make a difference.

Then we get to the personas that process the request, and those may make a difference in a corporate environment. Again, as silly as it sounds, you effectively want a Dwight and a Jim (yes, it is an Office reference) looking at those (more if you have a use case that requires more complex lens crafting), as they will both be looking for different things. Jim and Dwight each add their comments noting the sender, what they seem to be trying to do, and any issues they noted.

Notes from Jim and Dwight for a given message are passed to a third persona, which attempts to reconcile them, noting discrepancies between Jim and Dwight and checking against other similar notes.

...and so it goes.

As for flagging itself, that is a huge topic all by itself. That said, at least in its current iteration, I am not trying to do anything fancy. Right now it is almost literally: if you see something contradictory (X said Y then, X says Z now), show it in a summary. It doesn't solve for multiple email accounts, personas or anything like that.

Anyway, hope it helps.


This was a really interesting read. Thanks for the detailed breakdown and the office references. The multi-persona approach is interesting, almost like a mixture of experts. The corporate email contradiction use case is not something we had in mind, but I can see how flagging those inconsistencies could be valuable!

GPS unit

I think the article nails it, on multiple counts. From personal experience, the cognitive overload is sneaky, but real. You do end up taking on more than you can handle; just because your mob of agents can do the minutiae of the tasks doesn't mean you're freed from comprehending, evaluating and managing the work. It's intense.

For a very small number of people the hard part is writing the code. For most of us, it’s writing the correct code. AI generates lots of code but for 90% of my career writing more code hasn’t helped.

> You do end up taking on more than you can handle; just because your mob of agents can do the minutiae of the tasks doesn't mean you're freed from comprehending, evaluating and managing the work

I’m currently in an EM role and this is my life but with programmers instead of AI agents.


Also an EM, and it feels like I now have a team of juniors on my personal projects, except they need constant micromanaging in a way I never would with real people.

Does AI write 100% correct code? No, but under my watch it writes code that is more correct than anything anyone else on the team has contributed in the past year or more. Even better, when it is wrong I don't have to spend literal hours arguing with it, nor do I have to be mindful of how what I'm saying affects others' feelings, so I get to spend more time on actual work. All in all, it's a net positive.

I agree.

I provide specific instructions and gotchas when prompting the agent to write the code. I churn out instructions more quickly by using my voice.

Yes, it makes mistakes, but it can correct them quickly as well. This correction loop takes more time when it's a human on my team doing the work.


I never said it’s not a net positive - I said that writing more code won’t solve the problem.

> under my watch it writes code that is more correct than anything anyone else on the team has contributed in the past year or more

This I don’t believe.


> For most of us, it’s writing the correct code.

I am not sure about this statement. Aren't we always cutting corners to make things ~95% correct at scale, to meet deadlines within our staffing/constraints?

Most of us who don't work on the Linux kernel, space shuttles, or near-realtime OSes were writing good-enough code to meet business requirements.


My point is that coming up with the business requirements was always the hard part (unless you’re writing a scheduler)

So you're saying AI doesn't help, and having reports is just like using AI (which you said doesn't help).

What's stopping you from becoming an IC and producing as much as your full team then? What's the point of having reports in this case?


Started referring to it as "speed of accountability".

A responsible developer will only produce code as fast as they can sign it off.

An irresponsible one will just shit all over the codebase.


This has been my experience too. I feel freed up from the "manual labor" slice of software development and can focus on more interesting design problems and product alignment, which feels like a bit of a drug right now: I'm actually working harder and more hours.

I'm not sure I would agree in toto. Freeing up the minutiae allows for a higher cognitive load on the bigger picture. I use AI primarily for research gathering and refining what I have, which has freed up a lot of time to focus on the bigger issues and, specifically in my case, zeroing in on the diamond in the rough.

> Freeing up the minutiae allows for a higher cognitive load on the bigger picture

I think we do agree -- the higher "big picture" cognitive load feels more expensive than the minutiae cognitive load


Do you think this is inherent in AI-related work, or largely due to the current state of the world, where things are changing rapidly and we're struggling to adapt our entire work systems to the shifting landscape, while under intense (and often false) pressure to "disrupt ourselves"? Put another way: if this was similarly true twenty years ago with the rise of Google, is it still true today?

That is fun though.

I hated the old world where some boomer-mentality "senior" dev(s) would take days or weeks to deliver ONE fucking thing, and it would still have bugs and issues.

I like the new world where individuals can move fast and ship, and if there are bugs and issues they can be resolved quickly.

The boomer-mentality and other mids get fired which is awesome, and orgs become way leaner.

Just because there is an excess of CS majors and programmers doesn't mean we need to make benches for them to keep warm.


That has more to do with where you work than AI.

Some places have military-grade paperwork, where mistakes are measured in millions of dollars per minute. Other places are 'just push it, fix it later'.

AI is not going to change that. That is a people problem. Not something you can automate away. But you can fire your way out of it.


For sure. I was replying to people not in that situation; judging from the commenters here, that is where they (and I) have worked or are working now, whether at their own company or some other place.

I've only ever worked at places that are at the bleeding edge and even there we had total slackers.


That's totally orthogonal to what the OP is responding to, though. Military software can be bleeding edge as well as extremely susceptible to error-prone code, which means you need to test more. Similar are the cases with financial software, which is usually written in OCaml etc. Observing your ability to comprehend, the places where you work must be "bleeding" profusely.

> I hated the old world where some boomer-mentality "senior" dev(s) would take days or weeks to deliver ONE fucking thing, and it would still have bugs and issues.

What does that even mean? Are you a begrudging manager or an enthusiastic youngster who is upset that "boomers" are not killing themselves by juggling thousands of tasks, ADHD-style?


"Explain to me like I am five what you just did"

Then "Make a detailed list of changes and reasoning behind it."

Then feed that to another AI and ask: "Does it make sense and why?"


Garbage in, garbage out. If you're working under the illusion that any tool relieves you of the burden of understanding wtf it is you're doing, you aren't using it safely, and you will offload the burden of your lack of care onto someone else down the line. Don't. Ever. Do. That.

Then get rid of them. They can keep 1/10 of the humans, and have them run such agents.

The paper (rightfully) does not address this, but I'd like to speculate about the reasons why, overall, usage has been dropping.

I think it's because social media, as a whole, stopped providing any value to its users. In the early days it did bring a novel way to connect, coordinate, stay in touch, discover, and learn. Today, not so much.

It seems we are between worlds now, with the wells of the "old order" drying up, and the springs of the "new order" not found / tapped just yet.


I guess it's correlated with the commercialization of those platforms. The amount of content that is actually from your friends and family is declining and has been replaced by ads and viral content. If Facebook had been from the beginning what it is now, we probably never would have named it 'social media' in the first place.

Big time. I visit once/year and I'm always amazed at how useless Facebook has become.

I could barely find any updates from my friends; my feed is now an endless stream of AI-generated videos.

What's the use for that?


Facebook has become a community hub for me more than anything. Mountain biking, snowmobiling, contracting, VJing - there are lots of groups out there with very real and human discussion. You couldn't pay me to go back to reddit for that stuff.

You're right though, rarely do people post to their own timeline these days. I think it's the 90:9:1 social media stasis playing out.


So the real question is: what motivates Facebook decision makers to put their credible stuff behind such a low-quality front page, one that is seen as a joke by nearly everyone?

It would be like a reputable industry conference putting a troupe of low budget clowns doing carnival tricks in front of their entrances.


The answer might be that a huge percentage of people still watch those videos and click on ads.

My mom, for instance, might just scroll through all the slop and even believe it's all true, and click on an ad every once in a while—perhaps by mistake.


Ah, yes--that makes sense.

And it's better than Reddit because it's more personal and you get to meet people or even make friends..?


Doom scrolling. For people who get hooked it can be very very addictive.

I have a theory, but it's based only on my observation of younger family members; needless to say, it may be way off in aggregate. Apart from the obvious, I don't really see them posting on legacy social media platforms (FB and so on). TikTok was commonly used, but I can't say if recent US moves actually caused younger people to limit its use. On the other hand, fragmented Discords and the like did seem to be getting more common.

Did you see people mourning the demise of forum software, when neatly maintained places oriented towards specific topics gave way to noisy, all-encompassing places like FB and Twitter?

I think these fragmented Discords are the return to the idea of specific, uncrowded, neatly maintained places, with a relatively high barrier to entry for a random person. Subreddits are a bit similar, but less insular.


One of the only differences between new Reddit and Discord is that Reddit has the courtesy of a public index.

I don't know much about Discord (my only experience being some years ago, when I joined for an open source project and left soon after I noticed how incredibly user-hostile it is), but I do know that if you create a single account it is trivial to join any "server" (which, despite the marketing, is just a chatroom hosted on their servers).


We're gonna enter a new age/type of "lost media" as Discord remains popular year over year. It's a complete black hole unless you're manually backing things up. No possible Wayback Machine.

It's honestly a good thing. People should have social outlets where things are forgotten, not memorialized for all eternity.

Sure, but it's definitely not the return of forums, and the fact that it is being used in place of forums will cause trouble down the line.

It's a bad replacement for forums.

The Discords I'm active in are all everyday conversations, like big group chats. Some of them are funny/interesting and occasionally someone gives useful advice, but the vast majority are forgettable.

I think that people should publicly share valuable information (like great conversations or useful advice) and some of their typical conversations (a context summary for outsiders and history). But privacy and ephemeral-ness make people more open. It may be better to have a space for most conversations where they're not expected to be saved, or (because "not expected" in Discord relies on weak evidence and today's norms) guaranteed not to be saved.


It's not really a good thing for technical discussion and support topics though. Information that others might hope to find by searching the web is no longer discoverable that way.

They are forgotten for all useful intents and purposes, but a malicious asshole can and will memorialize everything you say on it.

Without a trusted third party doing something like this on a large scale, it doesn't really matter - because 'nah, that's just a fake.'

My wife and I were recently talking about how we kind of luck boxed into dodging a bullet when we had kids (which was rather late). But it's no wonder so many people had or are having so many issues growing up in a public social media era. It's not only your right, but responsibility, to say, believe, and generally do stupid things as a kid and a young adult. It's an important part of growing up. Nobody should ever have to worry about this period in their life following them around forever.


> it doesn't really matter - because 'nah, that's just a fake.'

The point of this sort of thing is that whether it is fake or not doesn't matter. Because it is possible for someone to record a log of your activities, someone claiming they have an incriminating log of your activities will be believed (by a very large number of people).

It might not be believed in a courtroom, but for the other 99.99% of life, we do not apply the same standards for reviewing evidence.

Whether the platform keeps logs isn't important - the platform won't weigh in on this sort of stuff anyways, unless there's a subpoena.


Yes, for social outlets. For niche hobbies? Old photos of specific milling machines used in machine shops on board US Navy vessels? For 80s European automotive restoration? For repairing and restoring retro-computing devices? Terrible. Terrible, terrible, terrible.

Tbf, most old forums seem to have lacked photo hosting, so all that's left is Photobucket placeholders.

> I do know that if you create a single account it is trivial to join any "server"

Only if it's public. There are many private Discord servers.

The way they do it is that the default set of permissions is basically none, but then there's a server role which actually gives you permissions to see the channels and post in them. So, anyone can join the server, but only people who have been granted this role (e.g. by admin) can do anything on it, or even see others.


Discord is still not the same and in my opinion inferior. It’s mostly synchronous chat with poor searchability, something very different from what forums used to be

I've said this for quite a while now. Social media has turned into a bitch fest. It's all you ever read nowadays and I'm tired of it. I'm sure most people are tired of it.

I fixed Facebook on my feed at least. I started aggressively unfollowing people who post or comment about politics constantly (even if I agree with them). Not unfriending, just unfollowing.

What’s left is a feed with pictures of my friends and family, important news about what’s going on in their lives, and trash talking about college football.

It’s great.


I tried that too, and wound up unfollowing everyone except the people who never post. Then my page gets filled with "suggested" content.

Isn't it also way too many posts from "suggested" pages, and way too many attention-stealing "reels"?

Didn't work for me. It barely even shows me content that my friends create. It's all reaction videos and conspiracy nonsense. Even if you block those channels, another one with a slightly different name pops up.

I'm always surprised that papers don't include some "chat" apps as social media. I don't see Discord mentioned in this paper but I use it almost identically to how I used Facebook in like 2010 and at least among people I know that's very common. I think the use cases from more traditional "social media" has migrated a lot back to chat apps and those still provide a lot of value and are more widely used than ever.

Terminology shifted somewhere along the lines, because the nature of sites like Facebook changed. These sites were called "social networking" in the early days, since they connected people. These sites are called "social media" these days, which I assume is a reflection that the top-down nature of these sites are much more like traditional print/radio/television media.

The treatment of chat applications, online forums, etc. as social media has always felt strange to me for that reason. While the companies that offer those services may control the platform, control of interactions is limited to moderation and the content of those interactions is rarely created by a commercial interest.


I read a quote once that went something like (paraphrased): every organizational app has to compete with email and every form of social media has to compete with group texts. And I think that's accurate.

If you pick random people, they'll often have very old group texts. Family, friend groups, etc. These are used to organize, disseminate news and so on. 10+ years ago, a lot of people did these things on Facebook. Group texts work on all platforms. They don't have ads. They're chronological.

From an engagement perspective, algorithmic recommendations and ranking (i.e., the newsfeed) have "succeeded", but they killed the use cases that people now use group texts for. And I think the two are fundamentally incompatible.


If I think about my own use of social media (and I have a facebook account from waaay back in the day, shortly after they dropped the requirement for a US edu email address), I wonder what value it ever had, over and above just emailing those people I'd like to stay in touch with every-so-often (which is what I do now). The reason why facebook switched to an algorithmic feed is because the previous method was failing, people were starting to give up posting. Algorithmic feeds didn't kill social media, they were an attempt at keeping alive what was already moribund. Social media, in the strict sense (so, not just online clubs or societies), never needed to be invented.

Yes, this.

I miss the old social media. I'd love to have it back. Having moved several times to various corners of the world, I have dear family and friends who are scattered across multiple continents. It's difficult to maintain ongoing 1:1 connections across such distances, but I used to be able to keep up with them and their families -- and them with mine -- via social media. It felt genuinely communal.

And then the posts from them became increasingly interspersed with -- and eventually outright replaced by -- advertisements, rage bait from random people(?) I didn't know, and then eventually AI slop. All with the obvious goal of manipulating my attention and getting me to consume more advertising.

It felt absolutely gross. Not something I wanted my personal life to be associated with. I stopped posting. So did my friends. The end.

But I still miss the old social media, and would use it if it actually existed (not just as a technology or a business model, mind you, but as an actual network with the adoption needed to create those kind of connections).


Real question: how much would you be willing to pay per month for access to a healthy social network? (borrowing another comment's clarification on the term vs. social media)

Healthy as in: has real people that you really know. Has no ads or bots or AI slop. Isn't full of dark patterns designed to turn you into a doom-scrolling zombie. And maybe it even has features that actually help you stay connected to other people in a real way? Oh, and you are actually the customer and not the product, so it won't just be a service that gathers bulk data on your "consumer preferences" so other places can target ads...

Because while the thing you want isn't a technology or a business model, I think if you actually wanted it to exist you would need both, and we all mostly agree the old model, where the social internet is just ad/data supported, is not a path to something good. So the very real question is: how much would you be willing to pay in dues for that social network? How much would your friends pay? If it existed, would you push for them to go there?

It's hard to imagine a paid service that is basically the web version of kale being popular enough to reach network-effects scale vs. TikTok's double-fudge Oreos, but it would need to start somewhere... and some people do choose kale over Oreos.


Based on the time range, the decline of social media use after 2020 could be more strongly related to the decline of Covid and remote school/work

That's a plausible explanation for the whole paper, unfortunately. And not one mention of Covid was made.

In addition to the factors named by sibling comments, which I largely agree with, there is also the rise of short form entertainment on these platforms.

In 2004, social media was mostly text, images and low-fidelity game experiences like Mafia Wars. Compare to a bottomless scroll of immediate-attention-hook optimized, algorithmically targeted video content found on TikTok / Instagram.

The social behaviors got zombified out of the audience.


I visited Facebook pretty much only to see my mom's posts. Even as I literally unfollowed everyone and everything, Facebook still wouldn't show me the only content I chose to see.

There are still the Facebook groups, and I really wish we had forums instead of those.


It’s because social media is no longer primarily about being social

I proffer that it's because TikTok has advanced the idea that social media is about consumption. It's easier to watch than to create or contribute, and as we have more options now to simply watch online, as more people stream as their primary source of media, the options are "work for free" or "watch".

https://m.youtube.com/watch?v=A2P06aHpvnQ

I saw this on YouTube yesterday. It is some animation director's micro social media website, limited to 50 people.

I don't care for the Ethereum part. But wouldn't it be cool if major social media platforms were like this?


If you optimize for engagement you create secondary effects that can drive users away.

If social media becomes addictive because it angers you constantly, that’s engaging but you may hate it. Enough people will realize it’s not worth the stress. The social media site just begins to be associated with negativity and anger - not fun.

It’s reasonable we hit peak social media in the US and enough people disengage to make the numbers come down. Though notably 2025 is not in this study.


"The old world is dying, and the new world struggles to be born: now is the time of monsters."

I'm confused; wasn't this already available via env vars? ANTHROPIC_BASE_URL and so on, and yes, you may have to write a thin proxy to wrap the calls to fit whatever backend you're using.

I've been running CC with Qwen3-Coder-30B (FP8) and I find it just as fast, but not nearly as clever.
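The proxy really can be thin. Here's a minimal non-streaming sketch (Flask + requests, assuming an OpenAI-compatible backend like vLLM on port 8000); real Claude Code traffic also involves streaming and tool-use blocks, so treat this as a starting point, not a drop-in shim:

    # Point Claude Code at it with: ANTHROPIC_BASE_URL=http://localhost:5000
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    BACKEND = "http://localhost:8000/v1/chat/completions"  # OpenAI-compatible

    @app.post("/v1/messages")
    def messages():
        body = request.get_json()
        # Anthropic-style request -> OpenAI chat format
        msgs = [{"role": "system", "content": body["system"]}] if body.get("system") else []
        msgs += body["messages"]
        r = requests.post(BACKEND, json={
            "model": body["model"],
            "messages": msgs,
            "max_tokens": body.get("max_tokens", 1024),
        }).json()
        # ...and the OpenAI response back into Anthropic's shape
        return jsonify({
            "id": "msg_local",
            "type": "message",
            "role": "assistant",
            "model": body["model"],
            "content": [{"type": "text", "text": r["choices"][0]["message"]["content"]}],
            "stop_reason": "end_turn",
        })

    if __name__ == "__main__":
        app.run(port=5000)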


> As someone who generally stays out of politics

Such a wildly privileged take.

Unrelated: why offshore this? Why not end-to-end local development and production?


You haven't lived until you've paid municipal taxes to see one of these things at work: https://www.youtube.com/watch?v=rVFFsKArEFk

I've literally watched them approach a pothole full of water, blow the water out with compressed air, retract the blower while the pothole refills, excrete asphalt mix into the watery hole then pat it down and compress it with a roller -- then proceed to the next pothole, driving over and denting the just-"repaired" one.


Yeah, but on the plus side, you're supporting local jobs!

Even in the video it looks like it does a terrible job. Hilariously, he drives past all the other potholes that were just shown at the beginning of the video.

Not phones. There are a handful of apps on these phones that are the equivalent of meth. The infrastructure powering those apps should be seized and repurposed for public good, the proprietors / owners should be charged with crimes against humanity and punished most severely.

If you think this is hyperbole or over-reacting, you're either too far gone, or part of the problem and should be included in the erasure.


Cool toy! As an avid snow enjoyer, I would like to point out that wind and snow interact in all kinds of interesting ways, and it would be super cool to see some of those captured in future iterations of a toy like that.

One of those is that snow doesn't necessarily get packed onto the wind-exposed side; those features tend to stay bare. The lee side is where the snow will accumulate, but that too depends on the features. Tall / steep / abrupt features will tend to generate areas with no snow at all directly behind them. Gentle downwind slopes will accumulate massive snow drifts. All of this depends on the quality of the snow itself, roughly related to the air temperature at the time of precipitation, but not solely that.

A quick little tweak to add to the control panel could be "stickiness": how much the snow pours vs. sticks to itself. A cheap one, and it would give the toy some behavioural variation.
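Roughly what I mean, as a falling-sand-style sketch (the grid and update rule here are made up for illustration, not the toy's actual code):

    import random

    def step(grid, stickiness=0.5):
        """One tick of a toy snow automaton. grid[y][x] is True where snow
        sits; stickiness in [0, 1] is the chance a settled grain stays put
        instead of toppling sideways, so sticky snow piles up steeply."""
        h, w = len(grid), len(grid[0])
        for y in range(h - 2, -1, -1):            # bottom-up, skip floor row
            for x in range(w):
                if not grid[y][x]:
                    continue
                if not grid[y + 1][x]:            # fall straight down
                    grid[y][x], grid[y + 1][x] = False, True
                elif random.random() > stickiness:
                    for dx in random.sample([-1, 1], 2):   # try a diagonal
                        nx = x + dx
                        if 0 <= nx < w and not grid[y + 1][nx]:
                            grid[y][x], grid[y + 1][nx] = False, True
                            break
        return grid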


Here’s my take with some dither and vector fields.

http://nouveauxhivers.dub4powder.xyz/

An invitation for an event I promoted recently.

Press ? for some stats on desktop.

Even though the coding was agent-assisted (not fully vibed, though), putting it together took two weeks and change to get it where it is and see it working on all mobile devices. I also have 15+ years of previous experience with JS, but zero with WebGL.


Impressive graphics! But the music loop is even better to my ears :-)


That's amazing! I'm going to need to up my game next year.

