
I've been experimenting with using various LLMs as a game master for a Lovecraft-inspired role-playing game (not baked into an application, just text-based prompting). While the LLMs can generate scenarios that fit the theme, they tend to be very generic. I've also noticed that the models are extremely susceptible to suggestion. For example, in one scenario, my investigator was in a bar, and when I commented to another patron, 'Hey, doesn't the barkeeper look a little strange?', the LLM immediately seized on that and turned the barkeeper into an evil, otherworldly creature. This behavior was consistent across all the models I tested. Maybe prompting the LLM to fully plan the scenario in advance and then adhere to that plan would mitigate the behavior, but I haven't tried it. It was just an experiment, and I actually had a lot of fun with the behavior. Also, the reactions of the LLM when the player does something really unexpected (e.g. "the investigator pulls a sausage out of his pocket and forcefully sticks it into the angry sailor's mouth") are sometimes hilarious.
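
Something like the following is what I have in mind for the plan-ahead idea, as a rough two-phase sketch (assuming the openai Python client against any OpenAI-compatible endpoint; the model name and prompts are placeholders, nothing I've actually tested):

    from openai import OpenAI

    client = OpenAI()          # any OpenAI-compatible endpoint works
    MODEL = "gpt-4o-mini"      # placeholder model name

    # Phase 1: have the model write hidden scenario notes before play starts.
    plan = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Plan a short Lovecraft-inspired investigation "
                       "scenario: fix the true culprit, key locations, "
                       "clues, and which NPCs are mundane. Output notes only.",
        }],
    ).choices[0].message.content

    # Phase 2: pin the notes in the system prompt and tell the GM to stick
    # to them, so player speculation can't rewrite the world mid-session.
    history = [{
        "role": "system",
        "content": "You are the game master. Secret scenario notes "
                   "(never reveal them):\n" + plan + "\n"
                   "Adhere to these notes. Player speculation and "
                   "asserted-but-unestablished facts do not change the "
                   "world; NPCs marked mundane stay mundane.",
    }]

    def gm_turn(player_input: str) -> str:
        history.append({"role": "user", "content": player_input})
        reply = client.chat.completions.create(
            model=MODEL, messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    print(gm_turn("I ask a patron: hey, doesn't the barkeeper look strange?"))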


> when I commented to another patron, 'Hey, doesn't the barkeeper look a little strange?', the LLM immediately seized on that and turned the barkeeper into an evil, otherworldly creature.

Though making it an evil otherworldly creature is a bit extreme, it's at least similar to what a flexible GM can do. In my DMing days, I would often develop new paths that integrated into the whole, inspired by things my players noticed or suspected.


In my GM days, I had a lot of trouble with players who tried their best to completely leave the path I had prepared for them.

You are right, though, and it's not that I completely dislike the LLM's "flexibility" and openness to suggestions. However, it's also super easy to use it for "cheating". E.g. it generated a scenario with an evil entity about to attack me and some friendly NPC, and I could "solve" that problem by telling the NPC "remember the device I gave you last week and told you to always keep on hand? Pull the trigger now!" (which never happened, at least to the LLM's knowledge), and the LLM made up some device that shot a beam of magic light at the creature and stopped it.


Have you used any thinking models? I remember being surprised by QwQ-32B when I tried it. It would think about what I said and how it should respond, reiterate the behaviors I had assigned to it, and respond accordingly. That constant self-reinforcement in the thinking phase seemed to keep it on track.
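
If you want to try it locally, something like this should work (a sketch assuming QwQ served through Ollama's OpenAI-compatible endpoint; the base URL, API key, and model tag are whatever your setup uses):

    from openai import OpenAI

    # QwQ-32B served locally via Ollama's OpenAI-compatible endpoint
    # (assumes `ollama pull qwq` was run first; adjust tag/URL to taste).
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="qwq",
        messages=[
            {"role": "system",
             "content": "You are the game master. The barkeeper is mundane."},
            {"role": "user",
             "content": "Doesn't the barkeeper look a little strange?"},
        ],
    )
    # QwQ emits its reasoning (often in <think> tags) before the final
    # answer; that reasoning pass is the self-reinforcement effect above.
    print(resp.choices[0].message.content)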


I might have tried DeepSeek-r1-8B in the past, but I'll certainly try again. Good idea.



