I think there's demonstrably very little difference at all between human and AI outputs, and that's exactly what freaks people out about it. Else they wouldn't be so obsessed with trying to find and define what makes it different.
The thesis of "Everything Is a Remix" is that there is no difference in how any culture is produced. Different models will have a different flavor to their output in the same way that different people contribute their own experiences to a work.
> I think there's demonstrably very little difference at all between human and AI outputs
Bold claim, as the internet is awash with counterexamples.
In any case, as I think this conversation is trending towards theories of artistic expression, “AI content” will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way. I often wonder if AI influencers have ever critically assessed art, or if they actually don’t understand it because of a lack of empathy or something.
And relatability, for me, is the ultimate value of artistic expression.
> Bold claim, as the internet is awash with counterexamples.
What do you consider a counterexample? Because I've been involved in local politics lately, and can say from experience that any foundation model is capable of more rational and detailed thought, and more creative expression, than most of the beloved members of my community.
If you're comparing AI to the pinnacle of human achievement, as another commenter pointed to Shakespeare, then I think the argument is already won in favor of AI.
> I think there's demonstrably very little difference at all between human and AI outputs
Counterexamples range from em-dashes and "Not this, but that" constructions to people complaining about AI music on Spotify (including me) that sounds vaguely like a genre but is missing all of the instrumentation and motifs common to that genre.
The rest of your comment I don’t even know how to respond to, to be honest.
You’re really going to make the claim that there are no counterexamples of human and AI output being indistinguishable on the internet? At least make the counterclaim that “those are from old models, not the newest ones”, that’s more intellectually invigorating than the comment you just provided.
> claim that there are no counterexamples of human and AI output being indistinguishable on the internet?
Is that a claim I've made? I don't see it anywhere. I think a lot of people think that because they can get the AI to generate something silly or obviously incorrect, that invalidates other output which is on-par with top-level humans. It does not. Every human holds silly misconceptions as well. Brain farts. Fat fingers. Great lists of cognitive biases and logical fallacies. We all make mistakes.
It seems to me that symbolic thinking necessitates the use of somewhat lossy abstractions in place of the real thing, primarily limited by the information which can be usefully stored in the brain compared to the informational complexity of the systems being symbolized. Which neatly explains one cognitive pathology that humans and LLMs share. I think there are most certainly others. And I think all the humans I know and all the LLMs I've interacted with exist on a multidimensional continuum of intelligence with significant overlap.
I hereby rebuff your crude and libelous mischaracterization of my assertion. How's that? :)
You said AI works were easily distinguishable via em-dashes and "not this, but that".
I said I have witnessed humans using that metric to accuse other humans here on Hacker News. Q.E.D.
You've asserted that they are easily distinguished. Practitioners in the field fail to distinguish using the same criteria. Is that not dispositive? Seems like it to me.
I claimed much earlier in the thread "I think there's demonstrably very little difference at all between human and AI outputs" which is consistent with "I think all the humans I know and all the LLMs I've interacted with exist on a multidimensional continuum of intelligence with significant overlap."
Two ways of saying the same thing.
Both of them suggesting that sometimes you may be able to tell it's the output of an AI or Human, sometimes not. Sometimes the things coming out of the AI or the Human might be smart in a way we recognize, sometimes not. And recognizing that humans already exist on quite a broad scale of intelligences in many axes.
I was not saying that LLMs cannot produce something like pinnacle of human achievement. I was saying we cannot quantify the difference between Shakespeare and something commonplace, because it requires the ability to feel.
> In any case, as I think this conversation is trending towards theories of artistic expression, “AI content” will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way.
I recently watched "Come See Me in the Good Light", about the life and death of poet Andrea Gibson. I find their poetry very moving, precisely because it's dripping with human emotion.
Or at least, that's the story I tell myself. The reality is that I perceive it to be written by a human full of emotion. If I were to find out it was AI, I would immediately lose interest, but I think we're already at the point where AI output is indistinguishable from human output in many cases, and if I perceive art to be imbued with human emotion, the actuality of it only matters in terms of how it shapes my perception of it.
I'm not really sure where we'll go with that from here. Maybe art will remain human-created only, and we'll demand some kind of proof of its provenance of being borne of a human mind and a human heart. Or maybe younger generations will really care only about how art makes them feel, not what kind of intelligent entity made it. I really don't know.
> demonstrably very little difference at all between human and AI outputs
Is there "demonstrably" a lot of difference between Shakespeare and an HN comment?
The point is exactly that there is no such difference, and that it enables slop to be sold as art. That is exactly the danger. But another point is that we had this even before LLMs; LLMs just make it more explicit and make it possible at scale.
Conrad Gessner had the very same complaint in the 16th century, noting the overabundance of printed books, fretting about shoddy, trivial, or error-filled works ( https://www.jstor.org/stable/26560192 )
Agreed. It really requires an understanding of not just the software and computer it's running on, but the goal the combined system was meant to accomplish. Maybe some of us are starting to feed that sort of information into LLMs as part of spec-driven development, and maybe an LLM of tomorrow will be capable of noticing and exploiting such optimizations.
Absolutely. I have written a small but growing CAD kernel which is seeing use in some games and realtime visualization tools ( https://github.com/timschmidt/csgrs ) and can say that computing with numbers isn't really even a solved problem yet.
All possible numerical representations come with inherent trade-offs around speed, accuracy, storage size, complexity, and even the kinds of questions one can ask (it's often not meaningful to ask if two floats equal each other without an epsilon to account for floating point error, for instance).
"Toward an API for the Real Numbers" ( https://dl.acm.org/doi/epdf/10.1145/3385412.3386037 ) is one of the better papers I've found detailing a sort of staged complexity technique for dealing with this, in which most calculations are fast and always return (arbitrary precision calculations can sometimes go on forever or until memory runs out), but one can still ask for more precise answers which require more compute if required. But there are also other options entirely like interval arithmetic, symbolic algebra engines, etc.
One must understand the trade-offs else be bitten by them.
Back in the early, early days, the game designer was the graphic designer, who also was the programmer. So, naturally, the game's rules and logic aligned closely with the processor's native types, memory layout, addressing, arithmetic capabilities, even cache size. Now we have different people doing different roles, and only one of them (the programmer) might have an appreciation for the computer's limits and happy-paths. The game designers and artists? They might not even know what the CPU does or what a 32 bit word even means.
Today, I imagine we have conversations like this happening:
Game designer: We will have 300 different enemy types in the game.
Programmer: Things could be really, really fast if you could limit it to 256 types.
Game designer: ?????
That ????? is the sign of someone who is designing a computer program who doesn't understand the basics of computers.
I wrote the Intellivision Mattel Roulette cartridge game back in the 1970s. It was all in assembler on a 10 bit (!) CPU. In order to get the game to fit in the ROM, you had to do every feelthy dirty trick imaginable.
I wish I'd kept a listing of that and other projects I worked on. But that never occurred to me.
A friend of mine wrote the Mattel Intellivision poker game. I was playtesting it (a very boring job), and got suspicious. I walked over to his desk and said the program was cheating. It was looking at my hole cards. He sighed and asked how I knew, and I replied it was obvious. He said he didn't have room to add code to improve its play otherwise. I don't know if he fixed it or not.
Not all, or even most, games are made by billion dollar studios. Overlapping roles are still the norm in small studios. And even studios that do have bespoke designer roles would likely benefit from telling those designers that computers have certain limitations around which game-design trade-offs need to be selected, because many AAA games run like shit. Often for reasons other than the game design, sure, but also sometimes for reasons that could be worked around more easily if the game design were accommodating the trade-offs.
The P40 is a Pascal-architecture card, which is no longer receiving driver or CUDA updates, and it's only available as used hardware. Fine for hobbyists, startups, and home labs, but there is likely a growing market of businesses too large to depend on used gear from eBay, but too small for a full rack solution from Nvidia. Seems like that's who they're targeting.
I suppose if I rent a cloud GPU and just let it sit there dark and do nothing then I wouldn't have to move any data to it. Otherwise, I'm uploading some kind of work for it to do. And that usually involves some data to operate on. Even if it's just prompts.
So you also believe when you rent a server you are sharing your data with the cloud? AWS and GCP are copying all private data on servers? Give me a break. There's a big difference between renting a server and using an API.
> So you also believe when you rent a server you are sharing your data with the cloud [hosting provider]?
Only if you upload your data to that cloud server you rented. Then, by definition, you are.
> AWS and GCP are copying all private data on servers?
Every computer copies data when moving it. Several times, in fact. Through network card buffers, switches, system memory, disk caches, and finally to some form of semi-permanent storage.
I don't have to think Amazon is stealing my data to be aware that Amazon S3 buckets containing privileged information are routinely found open. I don't have to think that Google is spying on me to know that operating equipment my business owns on prem and does not share requires me to trust fewer people and less complex systems than doing the same work from the cloud.
You are very quick to make foolish assumptions and assign them to others.
My understanding is that Abraham Lincoln literally had all the nation's telegraph lines routed through DC during the civil war, and AT&T has been an honorary branch of the US government ever since.
I disagree. HN discussions seem to have wildly liberal views of US copyright law and, in particular, fair use. Gamer's Nexus is surely commercial because they either make money (1) directly from YouTube, (2) directly from adverts / product placements, or (3) indirectly from merch.
I agree with the parent poster's point: "If news organizations can copy each other's clips of official speeches, who would bother going out and making such recordings?" When you see a head of state (or other VIP) making a speech and they show the media, there are normally 10+ different camera crews. If competitors can claim "fair use" for any of that footage, why would so many different media outlets send camera crews? The question answers itself.
A good counterpoint for fair use would be Wikipedia. They are very conservative about claiming fair use. I assume they have had pro bono (or not) lawyers review their policy and uses to confirm the strength of their claims. After hundreds of hours of reading Wiki, I can recall only once or twice ever seeing an artifact claim fair use. I think it was a severely downscaled photo of a no-longer-living person.
I think Wikipedia's relatively conservative (one might say erring on the side of safety) stance on fair use is easy to understand when considering that they have a bank account stuffed to the brim with cash, minimal spend on hosting and developers compared to income and savings, and copyright lawsuits are one of very few of their exposed legal surfaces.
Additionally, folks don't like to rely on fair use because the tests, though they have been well articulated, are inherently subjective and must be decided by a judge or jury. As a result, it's the sort of defense one wants to have available, but not depend on if possible.
Re: commercial use, in the US, just because a work is commercial does not automatically mean it loses fair use protection. Commerciality is only one of the four factors to be considered. Commercial parodies, for example, can still be fair use, especially where the work is transformative. IOW, commerciality may weigh against fair use, but it is not dispositive. Google v. Oracle involved a use that was clearly commercial, for example.
GN's case would also be helped by the nature of the information being factual as opposed to artistic.
There are a lot of factors in whether or not an org can successfully take something to trial. Venue, judge, representation, jury selection, evidentiary rulings, all kinds of stuff. An imbalance in representation could easily swing it. So when I say that I think GN has a reasonable case, it's just me using the Supreme Court's rubric and some theoretical idealized court room which doesn't really exist. All I can say is that a good job could be done in arguing it. Whether or not GN could afford that work, or would want to, IDK.
Perhaps you should re-read what I wrote for comprehension. 50% of their spending may be on tech, but their total spending is only 4% of their income. Apparently I'm more familiar with their financial statements than you.
I think people misunderstand the 4 tests. They are not in-or-out tests. Commercial use doesn't mean it's not fair use. Each factor is weighed against others.
In this case the purpose is critique or review, which supports fair use: the clip is only a small part of the video, GN isn't in the same business as BB and isn't substitutive for BB's work, and the clip was a recording of a factual event without a substantial creative element.
> Brother, wait until you learn about the Associated Press.
The same AP that licenses content to its members and charges non-members for the privilege of reusing their content?
"Many newspapers and broadcasters outside the United States are AP subscribers, paying a fee to use AP material without being contributing members of the cooperative. As part of their cooperative agreement with the AP, most member news organizations grant automatic permission for the AP to distribute their local news reports. "
> GN's use seems to satisfy all four factors.
It's weakest at #1 and #4.
#1: it's a commercial piece of work (so far as I can tell GN isn't a non-profit), and the use of the clip specifically isn't critical to the work. If you're critiquing a movie or something, and need to show a screengrab to get your point across, then that makes sense, but if the purpose of the video is just to establish "Trump said this", the video isn't really needed.
#4: see above regarding making recordings of official speeches.
Moreover, I'm not trying to argue that GN is definitely not fair use, only that there's a plausible case otherwise. If there's actual disagreement over whether it's fair use or not, then the DMCA process is working as intended, and Bloomberg isn't abusing it as Louis implies.
Yeah yeah, everyone enforces their copyrights to the maximum extent possible. But this does not prevent massive amounts of both licensed copying and fair use copying. The framework I outlined above is from the US Supreme Court's rulings on fair use, so it applies to everyone in the US.
[responses to edited-out portion of parent comment]
Re: #1, GN's work, while commercial, is an educational investigative journalism / documentary piece, which are well-established categories of fair use protection. GN's use is absolutely transformative.
#4: Bloomberg would have to prove a financial loss to have standing. That would mean that GN must have no other option than to use Bloomberg's clip, and pay the license, which I don't think would fly. GN would have just produced the segment differently.
With regard to whether or not a work is transformative, the Supreme Court’s formulation from Campbell v. Acuff-Rose, a case about parody, asks whether the new work merely supersedes the original, or instead adds something new, with a further purpose or different character, altering the first with new expression, meaning, or message.
A practical way to think about it is this:
What is the new use for?
Courts look first at whether the secondary use serves a different purpose from the original, not just whether it looks different. Uses for criticism, commentary, parody, scholarship, search/indexing, or other new functions often have a stronger transformative argument.
Is there new expression, meaning, or message?
That still matters, but after Warhol, a claimed new meaning by itself is usually not enough, especially when the secondary use is being exploited in a similar commercial market as the original. The Court emphasized that the inquiry is tied to the specific use at issue and whether that use has a distinct purpose.
Does it substitute for the original in the same market?
Even if the new work has some new meaning, it looks less transformative if it is serving basically the same licensing or audience function as the original. That overlaps with factor 4 as well.
How much was taken, and was that amount justified by the new purpose?
A use is more defensible when it takes only what is reasonably needed for the transformative aim. In parody, for example, some copying may be necessary to “conjure up” the original, but not more than needed.
All of which I think can fairly be evaluated in GN's favor. Though as you point out, the lawyers are paid to argue each point.
I've never received something other than what I've ordered. At worst the documentation is scant or missing entirely. Specifically with respect to motherboards, most of the aliexpress specials I've interacted with have had completely unlocked BIOSes. Which are easy to get yourself into trouble with, but kind of nice to have when you need them.
What Bloomberg proposed (sniffing the TTL signal between the BMC and boot ROM and flipping a few bits in transit) is far from science fiction. It would be easy to implement in the smallest of microcontrollers using just a few lines of code: a ring buffer to store the last N bits observed, and a trigger for output upon observing the desired bits. 256 bytes of ROM/SRAM would probably be plenty. Appropriately tiny microcontrollers can also power themselves parasitically from the signal voltage, as https://en.wikipedia.org/wiki/1-Wire chips do. SMBus is clocked from 10 kHz to 1 MHz, assuming that's what the ROM was hanging off of, which is comfortably within the Nyquist limit for an 8 to 20 MHz micro.
Something similar has been done in many video game console mod chips. IIRC, some of the mod chips manage it on an encrypted bus (which Bloomberg's claims do not require).
"On PsNee, there are two separate mechanisms. One is the classic PS1 trick of watching the subchannel/Q data stream and injecting the SCEx symbols only when the drive is at the right place; the firmware literally tracks the read pattern with a hysteresis counter and then injects the authentication symbols on the fly. You can see the logic that watches the sector/subchannel pattern and then fires inject_SCEX(...) when the trigger condition is met.
PsNee also includes an optional PSone PAL BIOS patch mode which tells the installer to connect to the BIOS chip’s A18 and D2 pins, then waits for a specific A18 activity pattern and briefly drives D2 low for a few microseconds before releasing it back to high-impedance. That is not replacing the BIOS; it is timing a very short intervention onto the ROM data bus during fetch."
> why not just modify it instead of adding physically observable devices to mess with it?
Look to the video game mod chip industry for your answer. Consoles obsessively verify system integrity from boot ROM to game launch. Most firmwares and OSes are encrypted, signed, hashed. Flipping bits in transit, perhaps only at specific times like system power-on, allows the ROM to be read, verified, and checksummed correctly without detection of the implant. This makes the implant not only persistent, but stealthy. Even pulling the ROM chip and replacing it with a different IC would not remove the implant. And if the injection point were chosen carefully, implant functionality might reasonably be expected to persist across ROM updates. This is exactly the case with the PsNee mod chip I mention above. If I had to wager a guess, it'd be because the target, like console makers, was known to update and verify ROMs, which is SOP in any large org.
In terms of being physically observable... barely. You'd need an X-ray to find such a thing buried between PCB layers or inside another component. And not only that, you'd need to be routinely X-raying all your incoming equipment and comparing all the images. And even if you dug the thing out, you'd get a few dozen bytes of ROM out of it with no clue about who made it or how. Perhaps you might be able to determine origin for the silicon based on doping ratios and narrow it down to a few facilities operating at the right feature size. How many of us, upon receiving new equipment, immediately disassemble it to bits, individually x-ray each, then re-assemble it? Not many.
It's not a dumb idea. And whether or not actual evidence exists, the firmware on the baseboard management controller is exactly the place where you can poke with the least effort for the greatest reward. That alone makes the attack plausible. Honestly, I'm surprised we haven't seen a BMC worm yet.