Hacker News

Unlike many of those approaches, which concern themselves with delivering human-designed static UI, this seems to be a tool designed to support generative UIs. I personally think that's a non-starter, and for now I much prefer the more incremental "let the agent call a tool that renders a specific pre-made UI" approach of MCP UI/Apps, the OpenAI Apps SDK, etc.
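A minimal sketch of the "tool renders a pre-made UI" pattern described above. The shapes here (`ToolResult`, `handleToolCall`, the `show_invoice` tool and `InvoiceCard` component name) are illustrative assumptions, not the actual API of MCP UI or the OpenAI Apps SDK:

```typescript
// Illustrative only: the host registers tools whose results reference
// pre-made, designer-built UI components, not model-generated markup.
type ToolResult = { component: string; props: Record<string, unknown> };
type Tool = (args: Record<string, unknown>) => ToolResult;

const tools = new Map<string, Tool>();

// The agent only chooses WHICH fixed component to show and with what
// data; the layout itself was designed by a human ahead of time.
tools.set("show_invoice", (args) => ({
  component: "InvoiceCard", // hypothetical pre-built component name
  props: { invoiceId: args.invoiceId },
}));

// The host resolves the tool call and renders the named component itself.
function handleToolCall(
  name: string,
  args: Record<string, unknown>
): ToolResult {
  const tool = tools.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool(args);
}
```

The contrast with a generative UI is that here the model never emits markup; the worst a confused agent can do is pick the wrong pre-approved component or pass it the wrong data.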


Legitimate curiosity - why?

Making an agent call a tool to manipulate a UI does feel like normal application development and event-driven interaction... I get that.

What else drives your preference?



