
A2A isn't a layer of abstraction over MCP; the two work in parallel and complement each other. MCP addresses the Agent-to-Environment question: how can agents "do things" on computers? A2A addresses the Agent-to-Agent question: how can agents learn about other agents and communicate with them? You need both.

You CAN try to build "the one agent that does everything," but in scenarios with many simultaneous data streams, a better approach is to have one stateful agent per stream (each using MCP), coupled with a single "executive" agent that calls on each stateful agent via A2A to get the high-level information it needs to make decisions on behalf of its user.
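As a sketch of that topology (every class and method name here is hypothetical, not the real A2A or MCP SDKs), assuming each worker agent keeps state for one stream and exposes a summary to the executive:

```python
# Hypothetical sketch of the "executive + per-stream workers" topology.
# None of these names come from the actual A2A or MCP libraries.

class StreamAgent:
    """Stateful worker: consumes one data stream (in practice, via MCP tools)."""
    def __init__(self, name):
        self.name = name
        self.events = []

    def ingest(self, event):
        # Fed by an MCP-backed source in a real deployment.
        self.events.append(event)

    def summarize(self):
        # The high-level view this agent would serve over A2A.
        last = self.events[-1] if self.events else None
        return f"{self.name}: {len(self.events)} events, last={last!r}"


class ExecutiveAgent:
    """Queries each worker (A2A-style) to build a picture for the user."""
    def __init__(self, workers):
        self.workers = workers

    def briefing(self):
        return [w.summarize() for w in self.workers]


logs = StreamAgent("logs")
metrics = StreamAgent("metrics")
logs.ingest("disk_full")
metrics.ingest({"cpu": 0.93})

exec_agent = ExecutiveAgent([logs, metrics])
print(exec_agent.briefing())
```

The point of the split: the workers hold per-stream state and tool access, while the executive only ever sees summaries, so its context stays small.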



What is an "agent"?

From my reading of this protocol, an agent looks like an entity exposing a set of capabilities. How is that different from, and complementary to, an MCP server exposing tools? Why would MCP limit you to an "everything agent"?
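For the "entity exposing a set of capabilities" reading, A2A's discovery mechanism is an Agent Card that an agent publishes so peers can find it. A rough sketch of one (field names are approximate, not guaranteed to match the spec exactly):

```python
# Rough sketch of an A2A-style Agent Card: the discovery document an agent
# publishes so other agents can learn what it does. Field names approximate.
agent_card = {
    "name": "flight-booking-agent",
    "description": "Books and rebooks flights on behalf of a user.",
    "url": "https://agents.example.com/flights",
    "skills": [
        {"id": "search_flights", "description": "Find flights matching criteria"},
        {"id": "book_flight", "description": "Book a specific itinerary"},
    ],
}

# Contrast with MCP: an MCP server lists *tools* (function-like hooks) for
# one model to call directly; an Agent Card advertises a peer you hand a
# whole task to, and that peer decides for itself how to accomplish it.
print([skill["id"] for skill in agent_card["skills"]])
```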

I am struggling to see the core problem that this protocol addresses.


Much-debated question, but if we run with your definition: A2A adds communication capabilities alongside tool-calling, which is ultimately just a set of programmatic hooks. It's like "phone a friend" for when you don't know the answer given what you have available directly (via MCP, training data, or context).

My assumption is that the initial A2A implementation will be done with MCP, so the LLM can ask your AI directory or marketplace for help with a task via some kind of "phone a friend" tool call, and it'll be able to immediately interop and get the info it needs to complete the task.
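That "phone a friend" bridge could look something like the sketch below: an MCP-style tool the LLM can call, which looks up a specialist in a directory and forwards the task over A2A. Everything here (the directory, `send_a2a_task`, the tool signature) is illustrative, not a real MCP or A2A API:

```python
# Hypothetical "phone a friend" bridge between MCP and A2A.
# All names are illustrative; this is not the real SDK of either protocol.

# A toy agent directory/marketplace mapping topics to agent endpoints.
DIRECTORY = {
    "travel": "https://agents.example.com/flights",
}

def send_a2a_task(agent_url, task):
    # Stand-in for a real A2A task request (which would go over HTTP).
    return f"[{agent_url}] completed: {task}"

def phone_a_friend(topic: str, task: str) -> str:
    """Tool exposed to the LLM over MCP: delegate a task to a specialist agent."""
    agent_url = DIRECTORY.get(topic)
    if agent_url is None:
        return f"no agent found for topic {topic!r}"
    return send_a2a_task(agent_url, task)

print(phone_a_friend("travel", "book SFO->JFK on Friday"))
```

From the model's point of view this is just one more tool call; the A2A hop to another agent happens behind it.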



