Hacker News | past | comments | ask | show | jobs | submit | echelon's comments

I wonder if there's a critical failure mode / safety feature of our species for some percentage of the population to always dislike whatever some other large percentage of the population likes.

As if it's to prevent the species from over-indexing on a particular set of behaviors.

Like how divisive films such as "Signs", "Cloud Atlas", and even "The Last Jedi" are loved by some and utterly reviled by others.

While that's kind of a silly case, maybe it's not just some random statistical fluke, but actually a function of the species at a population level to keep us from over-indexing and sub-optimizing into some local minimum, or exploring some dangerous slope, etc.


> I am wondering where to hang up my Techpriest robes in search of more elite pastures.

Capital and tech improvement will beat anyone chasing that.


> The thing that didn't make sense with this app: who would ever want to scroll only AI generated videos over a combined feed?

It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.

If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.


> look at US top videos on YouTube any given day

I'd rather eat poison


We can have that discussion, or we can have the more interesting discussion of just how much big corporate intellectual property, franchises, and brands have their hooks in pop culture.

Big IP is strong-arming OpenAI, Suno, and all the rest.

It'll be interesting to see whether creators at the bottom of the pyramid can create new brands and IP at a fast enough rate to offset the loss of access to corporate IP.

I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.

People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.

People want to generate IP.

edit: clarity


Personally I'm glad that big IP came in and smashed the AI companies like this. They've been relentlessly ripping off smaller creators for some time now.

It sets a precedent for those creators to now also hold these companies responsible. Under the current legal system, that's not a bad thing.

Also, seeing genuine original creations created with AI assistance is much more interesting to me


> Also, seeing genuine original creations created with AI assistance is much more interesting to me

The great disappointment about how all of this is marketed is that what AI should be good at - enhancing a tiny budget - is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange; I want some weirdo's fantastical horror movie that they could never get financed, but were able to green-screen and use AI to generate everything. I don't want a goofy top-40 country song full of silly lyrics, I want musicians to use AI to generate new sounds as part of composition.

In the same way that there's a difference between vibe coding and using a coding assistant...


> I want musicians to use AI to generate new sounds as part of composition.

As a onetime semi-pro musician, with decades of live performance and sound design experience:

I would rather burn my beloved instruments publicly and pee on the fire.


It depends how it is used. If it is an assist which generates sounds/samples that a musician can edit themselves, that seems fine. But spewing out a final form track from a prompt would just be slop.

Integrating AI with existing tools to improve productivity is harder and requires effort and investment...


As one whose musicianship involved a great deal of generating sounds and samples myself, via modular synthesis and the occasional use of a programming language for DSP, I assure you I find that idea of using genAI for an assist on that front offensive.

Could you use the bullshit machines to generate sounds that were nuanced, musical, and original, with enough time and effort?

Maybe. I'm not sure original is something they can do, but it's not totally implausible.

I would strongly recommend learning to use other tools for that purpose, instead of feeding the plagiarism monstrosities.


The aversion people like you have for AI is uncomfortable to me.

I understand your entire world model is shaped by your past and that this machine is changing the fundamentals.

As an outsider to music, I'm excited that I have access to something I previously did not through the use of Suno and other tools. I'm excited that I can come in and just try things and not hit a skill wall or quality barrier that would cause me to quit with the limited time and effort a working adult has. It's something I've wanted to do for a long time, but just never had the time for.

Attempting to learn costs thousands of hours before you can even start to feel good about it, and I don't have that time. Life is short and I'm already thinking about the end.

I used to be sympathetic to folks with your view, but now that programming and engineering are impacted by this - I'm in the crosshairs too. I'm subject to the same forces.

I've decided I love this tech even more. Claude Code is a tool, just like all of these other tools.

This rising tide of capabilities is so awesome. This is the space age stuff I dreamed about as a kid, and it's real and tangible.

So no, I won't restrict myself to your set of pre-approved tools. I'm going to have fun and learn my way.

And it is fun.

You can keep having fun the way you like to. What other people do shouldn't be ruining the fun you have, and if it is, then you should reevaluate why you do it.


I think he meant more like a synth. You could take recordings and process them using AI. At least that was my takeaway.

I spent years deep in modular synthesis, making my own patches, sounds, and effects processors then using them to perform music.

Taking away the precision, control, and serendipity afforded by modules and cables, or a programming language, and telling me "Just describe what you want and the plagiarism machine will spit out whatever correlates with that description on average" would destroy everything I love about synthesis.


You are arguing against a person who isn't there. I've also done similar work, and my mind wasn't on prompting the whole output. I think people have this kneejerk reaction to anything that isn't total negativity about AI in the creative space. It is only a tool.

The nuclear bomb is only a tool.

Ditto nerve gas, and the rack.

Tools absolutely _can_ have moral valence.

Beyond that, they can also be more or less effective for a variety of purposes.

I spent decades to achieve solid competence at a few different skills, and my experience of genAI thus far is that it can easily give the user the delusion of mastery, ensuring the user does not develop true skills, trapped in the false belief that they can do everything they want to or ever would want to.

The process of struggling to learn new skills showed me new worlds of possibility I would never have discovered or explored without first developing those skills.

There are very legitimate reasons why so many artists and musicians hate genAI.


Pop culture is a fickle beast. It's community-made, not corporate-made, and it can't be bought and sold like a traditional market. It's one of the few areas of life where nobodies can become somebody, and corporations hate this.

Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.


> Big IP is strong-arming OpenAI, Suno, and all the rest.

> It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands

The problem is, to create a brand, you need to be able to protect it against rivals either ripping you off, or diluting it.

The same mechanism that protects "big" IP also protects everyone else, even the small players.

> they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses

They already do that for music. But the issue is this: if we want culture, we need to find a way to pay for it. Is it possible for a bunch of mates to make enough money to live on by playing in a local band? Not really. They can only really make money if they have either a viable local gigging scene or a large enough online following to sell merch/Patreon subscriptions.

The big IP merchants were quite keen on videogen, because they sense that it's possible to cut out the expensive artists. If they don't have to pay actors, writers, and artists, it's way more profitable for them. This is part of the reason why AI hasn't been hit with the Napster ban hammer.

I think the other thing to remember is that creating good IP is hard, and you can't really just pull it out of your arse after 5 minutes. The original seed takes a long time to refine, test, and evolve. Even the half-arsed sequels require work.


Maybe, but the Sora shutdown comes immediately after OpenAI reached a deal with Disney to use their IP, which might have solved that problem.

> People wanted to use Sora for about a week. Then they lost the ability to generate IP.

Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests


Indeed!!

If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.

Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.


Weird. No activity or response to an obscure post beyond a couple of upvotes. Then, the next day, a brigade of no-engagement downvotes. IDC, but it seems like some corporate image management trying to hide negative takes on Google properties? Sheesh.

>If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.

Where can I get this data?


A theme I have noticed in content oriented towards young children is a very heavy use of probably unlicensed depictions of famous characters from popular franchises. Is Nintendo collecting a royalty from “it’s raining tacos“? Probably not.

Top videos are Mr Beast and other youtube personalities.

Only because they promote it. The default experience for a new user on Youtube is to show you content from creators with 5M+ subscribers. It’s a positive feedback loop.

I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…


Hm, turns out they removed these last year:

https://variety.com/2025/digital/news/youtube-trending-page-...

Bummer. It used to be at:

https://www.youtube.com/feed/trending

So last year, these were the top videos:

https://web.archive.org/web/20250324155132/https://www.youtu...

There's this, but it's nowhere near as good as seeing the actual videos:

https://trends.google.com/trends/explore?gprop=youtube


That's my exact ranking as well.

What do you use it for?

The only thing I can think of this doing "usefully" for people is spamming social media.

The high friction things I'd want automated are precisely the things I'd worry it would fuck up.

There's not a lot of middle ground.


You're not the person I was asking, but I'll ask you the same:

Have you tried it?

I've listed how I've used it in several threads in the past - you can see my comment history. I've tired of doing it every time this question comes up, because it usually becomes clear that the person asking has very little understanding of its capabilities and is going mostly by sensationalist reports (either "Oh my God, this is life-changing" or "Oh my God, you'll lose all your mail/money").

There is a (boring) middle ground, as will be obvious to anyone who spends a few days with it.


> No, but if I asked an intern to eat it for me, I wouldn't feel like I did anything at all.

That's a poor analogy.

If I asked an intern to implement a function, I know I gave the instruction and that I worked through them. The intern did work, but I did the fancy high-level work and killed several birds with one stone.

An even better analogy: if I'm a film director, I'm working through a lot of people. The DP, the cast, the crew, the producer (though they're my boss, telling me what I can/can't budget for)...

The best analogy for AI is the "film director" analogy.

There are good directors and bad directors, good films and bad films. No director works alone (unless it's some kind of avant-garde film school project).

You wouldn't say a film director isn't doing work. That they can't be uniquely felt through their work. That what they're doing isn't hard, doesn't require talent/taste, and doesn't get better over time.

We're all basically becoming film directors.


So yeah, our job that we were all interested in has transformed into a different thing (directing), which some people are also interested in, and some aren't.

There's no substantive difference between directing an intern and directing people on a movie, by the way, except the number of people. If you never aspired to direct people, it's all kind of the same, and if you actively dislike it, I imagine directing more people would probably be worse!


Directors do work, but a different kind of work. Not really what most people would consider hands-on filmmaking. They're more like managers: telling others what to do, how to light this, how to shoot that, where the characters should be. It's work, but it's not "making." If I want to make a film, I'm going to grab a camera and point it at something. If I wanted to tell other people to make a film, I'd become a director.

That's the major difference I feel between writing code and having an LLM do it. We're all being asked to become directors when we just want to make movies.


Making movies is hard. AI basically made the smaller, personal-sized things easy, but substantial projects are still out of reach. There isn't anything for an individual to feel good about.

A "market" is hypothesized to be "efficient" at price discovery.

An efficient "prediction market" would more quickly resolve to its expected outcome due to not only skin-in-the-game bets by experts, but also the influence of insiders.

Furthermore, bets are likely to shape outcomes. Betting someone will be assassinated (not allowed on Polymarket) would likely increase the probability of that outcome had there been no bet at all.
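As a rough illustration of the price-discovery claim (a hypothetical sketch, assuming a simple $1-payout binary contract and ignoring fees, spreads, and risk premia):

```python
# A binary prediction-market contract pays $1 if the event happens, $0 otherwise.
# Ignoring fees and risk premia, its trading price IS the market's implied probability.

def implied_probability(price: float) -> float:
    """Price of a $1-payout binary contract -> implied probability of the event."""
    if not 0.0 <= price <= 1.0:
        raise ValueError("binary contract price must be in [0, 1]")
    return price

def expected_profit(price: float, true_probability: float) -> float:
    """Expected profit per contract for a bettor who knows the true probability."""
    return true_probability * 1.0 - price

# If insiders know the true probability is 0.80 but the contract trades at $0.55,
# each contract bought is worth $0.25 in expectation. That buying pressure is what
# pushes the price - and the implied probability - toward 0.80.
print(round(expected_profit(0.55, 0.80), 2))  # 0.25
```

In this toy model, "efficiency" just means that mispriced contracts create a profit motive for informed bettors, whose trades move the price toward the true probability.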


I've heard this assassination thing said, but I don't see it. Someone still has to kill said person and somehow profit from having bet on this unlikely outcome. I'm not saying it's impossible; I'm just saying that it's never happened and maybe never will.

There is no assassination market to test it against.

If there was, I think you'd see quite a few public figures on the list.

And I think it _would_ cause folks to die. Which is why it's banned or regulated. (I'm actually not sure what the legal status is, just that Polymarket and US prediction markets disallow it.)


Polymarket is not fully in the US, and its betting on oil prices is not a CFTC-allowed market. This is, after all, a bet on a commodity, which is already regulated and already a product available on ICE. Technically Polymarket doesn't need to care about the assassination-market rule (which is a CFTC rule for prediction markets), but I presume they adhere to it out of respect for the reasoning behind the rule.

"0.6 t/s"

This is a toy.

We need to build open infrastructure in the cloud capable of hosting a robust ecosystem of open weights.

And then we need to build very large scale open weights.

That's the only way we don't get owned by the hyperscalers.

Edge inference isn't going to happen at a meaningful enough scale to save us.


Is it though? I would say 'proof of concept' instead.

The fact that it's running on a phone now just sets the goalpost and gets everyone excited about it: add more RAM and GPU to the next iPhone and it's not a toy anymore. Coincidentally, phone companies also have thousands of engineers sitting around wondering what to do in their next release to convince consumers to buy ...


'Toy' and 'proof of concept' are synonymous. What this really opens up is running non-toy models like Qwen3.5 35B-A3B, which are still considered very large in the mobile device context. Yes, it's too slow for interactivity, but if you acknowledge that it's supposed to deliver "Pro" level inference it works quite fine.

> add more RAM and GPU to the next iPhone and it's not a toy anymore

We're not going to get more RAM and GPU in consumer devices.

All of the supply is going into data center build-outs. As the hyperscalers' gamble on the future continues, we get left with weaker (or more expensive) devices - not stronger ones.

The market makers make more money if we're left to thin clients. They're also the ones who control supply and the shapes of devices.


I highly doubt the A20 Pro will be slower than the A19 Pro - particularly for AI workloads.

We're talking nearly five orders of magnitude of difference between 0.6 t/sec and 35k t/sec.

While there are problems that can be solved with 0.6 t/sec - particularly offline, at-the-edge, in-the-field applications - these are currently vastly outnumbered by other applications.
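A back-of-the-envelope check on that gap (taking the parent's 35k t/sec figure at face value; it's an asserted number, not a measured one):

```python
import math

local = 0.6          # tokens/sec, the on-phone figure quoted upthread
datacenter = 35_000  # tokens/sec, the "35kt/sec" figure above

ratio = datacenter / local
print(f"ratio: {ratio:,.0f}x")                          # ratio: 58,333x
print(f"{math.log10(ratio):.1f} orders of magnitude")   # 4.8 orders of magnitude
```

So the gap is roughly 58,000x, i.e. a bit under five orders of magnitude.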

There's just no competing. Local sucks.


> There's just no competing. Local sucks.

absolutely, however this doesn’t mean we should abandon local. i can’t remember who, but someone in the ai nuts and bolts arena said “smaller local models is where the exciting stuff is happening right now. it’s the area real fast progression is happening.” and it seems to be true. new big models aren’t making near the leaps smaller models are.

it’s so important we keep moving forward on running locally for the same reason it was important for us to use open standards when building the internet. if we hadn’t we’d all be connected through aol with 10 hours/month allowed internet usage and termed in through a sun workstation renting cpu cycles from some mainframe company at like “you’ve got 10,000 cpu cycles left on your monthly plan, please deposit $500 for 5,000 more.”

while all of this this is before my time, i’ve heard and read so many horror stories about how people could only connect through dumb terminals to “you wouldn’t believe it, computers then were the size of buildings” 1000 miles away and had to sign up for workload timeslots. make no mistake, this is the future these companies want, they want us to rent everything and own nothing.


Local is enough for most users as long as they're willing to accept a non-realtime response - which is a real limitation (especially for personal agentic use) but not a very significant one. The hardware is not that expensive, a single user's needs aren't going to saturate a state-of-the art AI datacenter rack or anything like that. Not even for heavy agentic workloads.
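To put rough numbers on "non-realtime" (a hypothetical sketch: the 0.6 t/s rate is the phone figure from upthread, and the token counts are illustrative guesses, not benchmarks):

```python
# Rough wall-clock wait for a local model decoding at a fixed tokens/sec rate.
def wait_minutes(tokens: int, tokens_per_sec: float) -> float:
    """Minutes to decode `tokens` output tokens at `tokens_per_sec`."""
    return tokens / tokens_per_sec / 60

# Illustrative response sizes (guesses) at the 0.6 t/s phone-class rate.
for label, tokens in [("short answer", 150), ("long answer", 800), ("agentic step", 4000)]:
    print(f"{label}: ~{wait_minutes(tokens, 0.6):.0f} min")
# short answer: ~4 min
# long answer: ~22 min
# agentic step: ~111 min
```

Minutes-to-hours latency is workable for batch or overnight jobs, which is the "non-realtime" trade-off being described.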

You rent your broadband internet. It's not a foreign concept that we can't own all the infra.

I don't know why we can't just get over the local compute thing and instead build open infra and models in the cloud. That's literally the only way we'll be able to keep pace with hyperscalers.

Local is not going to benefit 99% of use cases. It's a silly toy.

If we build open infra for cloud-based provisioning and inference, we could build a future we still have some ownership in. We'd be able to fine tune large models for lots of purposes. We wouldn't be locked in to major vendors.


SK Hynix: "Hold my LPDDR5X"

i personally think we need to work towards both open weights in the cloud and local.

use the experience we gain from both to bolster the other.

a future where we are unable to locally run is kind of troubling. as is a future with no open cloud. we need both to stop some of the horrors the hyperscalers will happily inflict.


These models haven't been very good for long.

To assume progress stops here is silly.

I'm already growing tired of prognostications using the current status quo when the current status quo isn't even six months old.


> When you're done spending millions on tokens, years of development, prompt fine tuning, model fine tuning, and made the AI vendor the fattest wad of cash ever seen, you know what the vendor will do?

They'll hire the person who knows AI, not the human clinging to claims of artisanal, character-by-character code.

It's entirely possible to engineer well-designed and intentional systems with AI tools and not stochastically "vibe" your way into tech debt.

AI engineers will get hiring preference. That is until we're all replaced by full agentic engineering. And that's coming.


It's almost like I addressed my entire comment to vibe coders and NOBODY else, because other uses of AI are pretty valid
