Hacker News
Reddit, but with multiple LLM agents, works locally
1 point by huydotnet 25 days ago | hide | past | favorite | 1 comment
This is a project I created for fun: https://redditwithagents.vercel.app/

screenshot: https://i.imgur.com/JFMFBNF.png

It's basically a web app that mimics parts of Reddit's UI, letting you discuss topics with LLM agents right in the browser.

All of the LLM API calls happen in the browser, as the app has no backend. You can also configure the app to use your local LLM APIs.

For example, to use LM Studio, make sure you serve the model locally and check the two options: "Enable CORS" and "Serve on Local Network".

Here's what that looks like: https://i.imgur.com/TfzIjl4.png

Then go to the app's settings page and set the following:

    API URL: http://192.168.<whatever>.<your>:1234/v1
    API Key: whatever-key-you-set
    Model: something like openai/gpt-oss-20b
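Since everything runs client-side, the settings above correspond to a plain OpenAI-compatible chat completions call made straight from the page. A minimal sketch of that, assuming LM Studio's standard `/v1/chat/completions` endpoint (the `AgentConfig` and `buildChatRequest` names are illustrative, not taken from the project's source):

```typescript
// Illustrative sketch, not the app's actual code.
interface AgentConfig {
  apiUrl: string; // e.g. "http://192.168.x.x:1234/v1"
  apiKey: string;
  model: string;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the request pieces; kept pure so it is easy to test.
function buildChatRequest(cfg: AgentConfig, messages: ChatMessage[]) {
  return {
    url: `${cfg.apiUrl.replace(/\/$/, "")}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${cfg.apiKey}`,
    },
    body: JSON.stringify({ model: cfg.model, messages }),
  };
}

// In the browser this is a plain fetch with no backend in between,
// which is why LM Studio's "Enable CORS" option has to be on.
async function chat(
  cfg: AgentConfig,
  messages: ChatMessage[]
): Promise<string> {
  const { url, headers, body } = buildChatRequest(cfg, messages);
  const res = await fetch(url, { method: "POST", headers, body });
  if (!res.ok) throw new Error(`LLM API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the API key lives in the browser, this pattern is only sensible for local or personal endpoints like the LM Studio setup described here.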
You can also check out the source code here: https://github.com/huytd/reddit-with-agents/




Thanks! I've been testing conversations with 20 to 30 comment threads and about 5 replies per thread; so far so good.

I heard Chrome also ships a built-in Gemini Nano model; maybe this app would be a good example for integrating it.




