I think the reimplementation in question rubs people the wrong way because of the intentions of the parties on both ends, and one party's ignoring of the other's (erasure of, from some POV). The original author of the code obviously chose their license intentionally (copyleft "keep it open" reasons, seemingly). And the rewrite author has their intentions as well (unknown beyond "fewer restrictions on derivatives"). The problem comes when those intentions conflict, and in this case the rewrite author basically just ignored the usual convention for resolving the conflict, which is forking or starting a new project. Claiming "I've maintained it for a while so I can do whatever I want" is kinda gross because it just completely overrides the original author's intention with their own. They're basically saying "my intentions as maintainer are more important than the creator's", and that doesn't feel even. The "is it a real clean-room?" question, given prior exposure through both LLM training and working on the codebase, is always going to be contentious. But the "should I override and erase someone else's intentions?" question is easy to answer: no. Especially since we have come up with so many ways to make it easy not to (forking is practically free, the abstraction of APIs is powerful, etc.).
It also just feels a little nefarious. There isn't much reason to change between the licenses in question beyond allowing the code to be more tightly integrated into something commercial and closed-source. In which case, having an LLM write a compatible rewrite _in a new project_ seems reasonable at the current moment in time. It's this intentional overriding of the original intentions, seemingly _for profit_ as well, that is the grossest part, because the alternatives are just so easy and common.
If Theseus recreated the ship from the original plans but with all new parts, drew up new plans, and then burned the original plans and original parts, is it the same ship? If yes, what if he (with some shipbuilding magic) converted the second ship to have a completely open floor plan inside? Still the same ship?
That's not a mistake. You'd be getting spam marketing anyway, why not make sure it's something obvious? I always pick the oldest possible age when asked, just to mess with their data, because they shouldn't fucking care.
Don't limit, notify.
Has worked for TV (and movies to an extent, though theaters do limit somewhat, must have been some litigation around that...) pretty much forever.
If the content is mixed, it makes even more sense to have the content supply the age data. This is how it has worked with broadcast media pretty much forever. TV shows and movies gain their ratings based on the worst case on display, i.e., a show doesn't have to consist entirely of swearing to gain a "language" warning, it just has to have some. Definitively mixed content.
I think your example exemplifies this. Among Us is not inherently adult-only, but since it's multiplayer, they don't control what other players say and do. Definitively mixed content. They should not be asking you to verify, they should be telling you and letting you decide if your kid can play.
I kinda can't believe their lawyers decided to go that route and assume all the PII responsibility that comes with collecting that data, instead of just making the "it's online and there might be d-bags on our servers" rating much more obvious and explicit.
They can profit off of the personal data they collect, so it's no surprise they'd take any opportunity and use any available excuse to collect more of it. From their perspective there is effectively zero responsibility to secure that data properly and handle it safely because there are effectively zero consequences for companies when they fail to.
Because programmers made the LLMs, and they first applied it to the problems they know, so the examples of "replacing a programmer" are abundant. Then the hype train rolled in and now it's suddenly going to replace everything, just that software engineering is the low-hanging fruit since they already have "proof" that it works in that domain.
Hint: it actually doesn't work at real depth, and why not is fairly well explained in TFA: the hype always underestimates the depth of the field. So these advances do help make easy things easy (in the case of LLMs, because they have been trained on a billion examples of the easy stuff), but don't really end up helping with the hard things (because they only produce new things that weren't encompassed in their training by getting lucky, and because tedious things are different from hard things).
CS programs have high attrition rates because programming or "coding" has been touted as easy money for a couple of decades now. When people find out it's not so easy, they bail. Holding a few layers of abstraction in your head is not something that everyone does easily.
Just as keeping most of the structure of a 4-novel-long story in your head is not something everyone can do, hence why being a successful author is not something that everyone can do. Start telling everyone that being a novelist is easy money, though, and you'll see Comp 101 courses filling up and the attrition rate correspondingly go through the roof.
Anyone could make their own tools before this as well. Just needed to learn something first.
Real democratization of programming is free access to compilers, SDKs, etc. AI coding does nothing to help that. In fact, it hurts it, because those non-programmers only get access to the AI tools on the AI companies' terms. Sure, they could train their own models, but then we're back to having to learn things.
Then those "upgrades" will come down to just using an LLM as a lexer/parser for natural language and then calling a compiler on the generated AST. Except natural language is often very very ambiguous and removing that ambiguity by limiting the possible inputs just brings you closer and closer to a high level programming language. So why not just start there and use something way more efficient than an LLM for lexing/parsing? I'm not saying current high level languages are the endgame, they can certainly be improved and specialized and made faster. Just that the current architecture does not need to be replaced by statistical modeling, especially when you talk of making them deterministic with starting seeds... why bother forcing an LLM to follow the same deterministic path when we already know how to make tools to do that?
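To make that concrete, here's a toy sketch (the grammar and all names are hypothetical, invented for illustration): once you've restricted "natural language" input enough to be unambiguous, a plain deterministic parser covers it, with no statistical model in the loop at all.

```python
# Toy example: a deterministic parser for a restricted English-like
# command language. Restricting the input is exactly what removes the
# ambiguity -- and at that point it's just a small programming language.
import re

def parse(command):
    """Parse a restricted English arithmetic command into an AST tuple."""
    m = re.fullmatch(r"add (\d+) and (\d+)", command)
    if m:
        return ("add", int(m.group(1)), int(m.group(2)))
    m = re.fullmatch(r"multiply (\d+) by (\d+)", command)
    if m:
        return ("mul", int(m.group(1)), int(m.group(2)))
    raise ValueError(f"ambiguous or unsupported input: {command!r}")

def evaluate(ast):
    """Walk the tiny AST; stands in for 'calling a compiler on it'."""
    op, a, b = ast
    return a + b if op == "add" else a * b

print(evaluate(parse("add 2 and 3")))      # 5
print(evaluate(parse("multiply 4 by 5")))  # 20
```

Same input, same output, every run, for a vanishingly small fraction of the compute an LLM would burn on the same job.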
I hope people can see that "winning big" using that process is very unlikely to also be "winning long term".
(From GP) "AI coding sometimes sacrifices correctness or cleanness for simplicity, but it will win and win big as long as the produced code works per its users' standards."
Those users' standards are an ephemeral target for any software beyond a one-shot script or a hobby project with a minimal user:dev ratio. That incorrect and unclean code simply isn't conducive to the many iterations needed when those "users' standards" change. And as we all know, that change is _inevitable_, and oftentimes happens before the software in question has even had a single release! Get ready to throw ever more tokens at correcting and cleaning if you ever really "win big" and need to actually support the product.
It's very much gross short-sighted thinking that goes right along with the gross short-sighted thinking providing all the [fake] value around this crap.
Fixed that for you.
The input is sooo much more than your prompt, that's kind of the point.