There's something unique about art and writing where we just don't want to see computers do it
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
I had a weird LLM use instance happen at work this week. We were in a big important protocol review meeting with 35 remote people and someone asks how long it takes for IUDs to begin taking effect in patients. I put it in ChatGPT for my own reference and read the answer in my head but didn't say anything (I'm ops, I just row the boat and let the docs steer the ship). Anyway, this bigwig Oxford/Johns Hopkins cardiologist who we pay $600k a year pipes up in the meeting, and her answer is VERBATIM the ChatGPT language, word for word. All she did was ask it the answer and repeat what it said! It kinda made me sad that all this big fancy doctor is doing is spitting out lazy default ChatGPT answers to guide our research :( Also everyone else in the meeting was so impressed with her, "wow Dr. so and so thank you so much for this helpful update!" etc. :-/
The LLM may well have pulled the answer from a medical reference similar to the one the doctor used. I have no idea why you think an expert in the field would use ChatGPT for a simple question; that would be negligence.
A climate scientist I follow uses Perplexity AI in some of his YouTube videos. He stated one time that he uses it for the formatting, graphs, and synopses, but knows enough about what he's asking that he knows what it's outputting is correct.
An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.
At best, they can attempt to recall sections of scraped information, which may happen to be the answer to a question. No different from searching the web, except that if you search yourself you instantly know the source and how much to trust it. I've found LLMs tend to invent sources when queried (although that seems to be getting better), so it's slower than searching for information I already know exists.
If you have to be more of an expert than the LLM to verify the output, it requires more careful attention than going back to the original source. Useful, but it always writes in a different style from previous models/conversations and from your own writing.
LLMs can be used to suggest ideas and summarize sources, if you can verify and mediate the output. They can be used as a potential source of information (and the more sources that agree, the better). However, they cannot readily be used to accurately infer new information, so the best they can do here is guess. It would be useful if they could provide a confidence indicator for all scenarios.
What makes you think the LLM wasn't reproducing a snippet from a medical reference?
I mean, it's possible an expert in the field was using ChatGPT to answer questions, but it seems rather stupid and improbable, doesn't it? It'd be a good way to completely crash your career when found out.
The only way I can understand that as an explanation is if your entire company can see each other's chats, and so she clicked yours and read the response you got. Is that what you're saying?
How else would she have been able to parrot the exact same GPT response without reading it directly? You think she just thought of it word for word exactly the same off the top of her head?
They're saying that the shared account is enough for OpenAI to return the same result. Very interesting; I'd like to know more, like whether it was a generic IUD or a specific one in the query. Also, the doc is a cardiologist; they don't specialize in gyno stuff, but their training/schooling is enough for them to evaluate sources.
Just for reference: before AI, it was typical for employers of doctors to pay for a service/app called UpToDate, which provided vetted info for docs, sort of like Google.
There were several specific brands cited in the response and she read through them one by one in the same order with the supporting details, word for word. I think it just gave us the same response and she read it off the page.
The one thing a cardiologist should be able to do better than a random person is verify the plausibility of a ChatGPT answer on reproductive medicine. So I guess/hope you're paying for that verification, not just the answer itself.
If the writer’s entire process is giving a language model a few bullet points… I’d rather them skip the LLM and just give me the bullet points. If there’s that little intent and thought behind the writing, why would I put more thought into reading it than they did to produce it?
And what's more, the suspicion that something was written by AI causes you to view any writing in a less charitable fashion. And because it's been approached from that angle, it's hard to move the mental frame back to being open to the writing. Even untinged writing gets infected by the smell of LLMs.
That's what's happening to me with music and discovering new artists. I love music so much, but I simply cannot trust new music anymore. The lyrics could be written by AI, the melodies could've been recommended by AI, or even the whole song could've been made by AI. No thanks, back to the familiar stuff...
A person can be just as wrong as an LLM, but unless they're being purposefully misleading, or sleep-writing, you know they reviewed what they wrote for their best guess at accuracy.
Here's my take - art has value because of the context it is created in. The author's history, current events that we live through as groups, the reactions to a work being released, availability of materials - all these things are fundamentally human. I believe the reason art has value to us is because of the empathy and humanity that we all share despite major differences in beliefs.
That's not to say computers can't generate beautiful things, but unless you expand the context out to include the history of how a program that can create such art came to be, the output is not meaningful. This is why people do not react well to AI art made from simply throwing prompts at a model, or writing that does not feel like it has style, struggle, or any personal flavor.
I've always believed that LLMs will be able to fake it perfectly one day. But as a music fan, no fully computer-generated music will ever bring me the range of emotion and joy that another human's story and creative process through that story does.
If/when the AI music gets good enough, how will you know the difference? I find small artists on spotify all the time that I enjoy and there's no way to know anything about their creative process.
I think what you describe is different than the level of joy/enjoyment I seek and am talking about. Sure the AI song can be pleasing to the ear, much like a catchy pop song or jingle at a grocery store is pleasing to the ear.
The level of enjoyment I get from art is looking into the artist, their background, and anything else surrounding the work that can add meaning to it for me. Even small artists have these, and it's easier than ever to connect with artists on this level thanks to the internet.
All that's to say: sure, at a glance it may sound/look the same, but that's only part of the joy of art.
If you think about how an LLM works, it rounds off the outliers in its training data so the result is sort of averaged and homogenized. Art and writing are an expression of the very thing that LLMs discard - our unique qualities, our outlying quirks, that make us more than just another human.
Writing nice sounding text used to require effort and attention to detail. This is no longer the case and this very useful heuristic has been completely obliterated by LLMs.
For me personally, this means that I read less on the internet and more pre-LLM books. It's a sad development nevertheless.
Art, writing, and communication is about humans connecting with each other and trying to come to mutual understanding. Exploring the human condition. If I’m engaging with an AI instead of a person, is there a point?
There’s an argument that the creator is just using AI as a tool to achieve their vision. I do not think that’s how people using AI are actually engaging with it at scale, nor is it the desired end state of people pushing AI. To put it bluntly, I think it’s cope. It’s how I try to use AI in my work but it’s not how I see people around me using it, and you don’t get the miracle results boosters proclaim from the rooftop if you use it that way.
You're absolutely right! Art is the soul of humanity and without it our existence is pointless. Would you like me to generate some poetry for you, human?
Agreed, except s/know/think. It's possible that there are some false positives in my detection algorithm, that I tune out just because someone's prose style has that undercurrent of blandness characteristic of LLMs. But I suppose if we're talking about "art" and not, for example, technical documentation, that's no great loss --- bland writing isn't worth recreationally reading.
It does seem that LLMs could avoid this detection with some superficial tweaks such as injecting poor grammar and reducing peppiness. I hope it doesn't get to the point that I have to become suspicious of all text.
> There's something unique about art and writing where we just don't want to see computers do it
Speak for yourself. Some of the most fascinating poetry I have seen was produced by GPT-3. That is to say, there was a short time period when it was genuinely thought-provoking, and it has since passed. In the age of "alignment," what you get with commercial offerings is dog shite... But this is more a statement on American labs (and to a similar extent, the Chinese who have followed) than on "computers" in the first place. Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer. (With the added option of the reader playing one of the parts.) This will radically change how we think about textual form, and I cannot wait for the compute to get us there.
Re: modern-day slop, well, the slop is us.
Denial of this comes from a place of ignorance; take the blinkers off and you might learn something! Slop will eventually pass, but we will remain. That is the far scarier proposition.
"inhabited by characters ACTUALLY living in the computer"
It's hard to imagine these feeling like characters from literature and not characters in the form of influencers / social media personalities. Characters in literature are in a highly constrained medium, and only have to do their story once. In a generated world the character needs to be constantly doing "story things". I think Jonathan Blow has an interesting talk on why video games are a bad medium for stories, which might be relevant.
Please share! Computational literature is my main area of research, and constraints are very much at the center of it... I believe there are effectively two kinds of constraints: those in the language of stories themselves, as thing-in-itself, and those imposed by the author. In a way, authorship is incredibly repressive: authors impose strict limits on the characters, what they get to do, etc. This is a form of slavery. Characters in traditional plays only get to say exactly what the author wants them to say, when he wants them to say it. Whereas in computational literature, we get to emancipate the characters! This is a far cry from "prompting," but I believe there are concrete paths forward that would be somewhat familiar (though they might not immediately click) for game-dev people.
Now, there are fundamental limits to the medium (as a function of computation), but that's a different story.
Just so I understand who I am talking with here, when you say authorship is a form of slavery, is that because you believe the characters in a written story have a consciousness/sentience/experience just like animals do, or are you just using the word 'slavery' to mean that in traditional literature the characters are static? One of the strengths of traditional literature is that staticness, however, because the best stories from literature are necessarily highly engineered and contrived by the author. Great stories don't happen in the real world (without dramatization of the events) exactly because too many things can happen for a coherent narrative to unfold.
I'm a huge fan of Dwarf Fortress, but the stories aren't Great without imagination from the player selectively ignoring things. Kruggsmash is able to make them compelling because he is a great author
The latter, and as I said in prior writing—it's not that I don't believe in constraints, I simply don't believe that this "staticness" is a feature of contrivance—rather, I would say it's a side-effect having to do with limitations of the medium.
> Kruggsmash is able to make them compelling because he is a great author
This is how all good plays come to be: from great authors. Whether AI could be "great" is a question I'm ill-equipped to address in any shape or form, but given some priors I would say it's more likely than not. However, I'm mostly interested in enabling the human authors themselves. For example, if you're familiar with interactive fiction, you know there's a complexity explosion around branching. The first approximation of comp-lit is to assist with that complexity by allowing the author to de-couple story constraints from the text itself. This requires a form of metatext, or hypertext, if you were to venture into alternate reality games.
> Characters in traditional plays only get to say exactly what the author wants them to say
But the human actors sometimes adlib. As well as being in control of intonation and body language. It takes a great deal of skill to portray someone else's words in a compelling and convincing manner. And for an actor I imagine it can be quite fun to do so.
> Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer.
So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
My idea of godhood is to first try to live up to a moral code that I'd be happy with if I was the creation and something else was the god.
If this isn't what you meant, then yes, choose your own adventure is fun. But we can do that now with shared worlds involving other humans as co-content creators.
> So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
Sshh! If they know we've figured it out, we'll all be restarted again.
I would love to see truly good AI art. Right now the issue is that AI isn't at the point where it could produce actually good art by itself. If we had to define art, it would be kind of the opposite of what LLMs produce right now: LLMs try to produce the statistical norm, while art is more about producing something out of the norm. And when LLMs/AI do try to produce out-of-norm things, they only produce something random, without connections.
Art is something out of the norm, and it should make some sense at some clever level.
But if there was AI that truly could do that, I would love to see it, and would love to see even more of it.
You can see this clearly if you ask AI to make original jokes. They usually aren't very good, and when they are, it's because the model got randomly lucky somehow. It is able to come up with related analogies for the jokes, but that's just simple pattern matching of what is similar to what, not insightful and clever observation.
I've lost the link, but there was quite a cool video of virtual architecture created by AI. It was OK because it wasn't trying to be human-like - it was kind of uniquely AI. Not the exact one, but this kind of stuff: https://www.reddit.com/r/Futurism/comments/1oedb0m/were_ente...