
In asynchronous environments, you may not be able to repeat the same query with the same result (unless you control a cache of results, which has its own issues). If the command’s implementation determines some condition that subsequent code is interested in (a condition that isn’t preventing the command from succeeding), it’s generally more robust for the command to return that information to the caller, who can then make use of it. But now the command is also a query.
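A minimal sketch of the point above, in Python with invented names (`Account`, `withdraw`, `went_negative` are illustrative, not from the thread): the command reports the condition it observed while executing, so the caller never has to issue a separate query against state that may already have changed.

```python
import threading

class Account:
    """Toy account whose command reports a non-fatal condition to the caller."""

    def __init__(self, balance):
        self._balance = balance
        self._lock = threading.Lock()

    def withdraw(self, amount):
        # Command that is also a query: the "went negative" condition is
        # determined atomically with the state change, instead of forcing
        # the caller to re-query state that another thread may have moved.
        with self._lock:
            self._balance -= amount
            went_negative = self._balance < 0
        return went_negative

acct = Account(100)
if acct.withdraw(150):    # condition observed atomically with the command
    print("overdrawn")    # robust even under concurrent withdrawals
```

A separate `acct.balance() < 0` query after the command could race with other callers; returning the condition sidesteps that, at the cost of breaking command/query purity.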


I can’t decide if this really is the biggest problem with CQS. Certainly the wiki page claims it is, and it’s a reasonable argument. For some simpler cases you could dodge it by wrapping the function pairs/tuples in a lock. Database calls are sketchier: a transaction only “fixes” the problem if you ignore the elephant in the room, which is that it reduces system parallelism by a measurable amount, because even in an MVCC database transactions aren’t free. They’re just cheaper.

Caches always mess up computational models because they turn all reads into writes, which invalidates properties you could previously establish with static analysis. I know a lot of tricks for making systems faster, and I’ve hardly ever seen anyone apply most of them to systems after caching was introduced. Caching has one upside and dozens of downsides as bad as or worse than this one.


One of the big benefits of CQRS is that everything becomes asynchronous, and you can handle write-heavy data with stream processing. Implementing distributed locks across your stream processing system is... unpleasant to contemplate.

If you really need locks, that generally locks you out of this kind of architecture, which makes the CQRS value proposition much flimsier.


> it’s generally more robust for the command to return that information to the caller, who then can make use of it. But now the command is also a query.

You don't need the command to return anything (though it can be more efficient or convenient). It can set state indicating, "Hey, I was called, but by the time I tried to do the thing the world had changed and I couldn't. Try using a lock next time."

  if (query(x)) {
    command(x)
    result := status(x)  // e.g. ShouldHaveUsedALockError
  }
The caller can still obtain a result following the command, though it does mean the caller now has to explicitly retrieve a status rather than getting it in the return value.
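As a concrete sketch of that query/command/status split, here is a hypothetical Python version (all names invented): the command stays void and records its outcome in side state, which the caller then fetches with a separate status call.

```python
class Store:
    """Sketch of the query/command/status triple from the snippet above."""

    def __init__(self):
        self._items = set()
        self._last_status = {}   # hidden state, keyed by the command argument

    def query(self, x):
        # "Can I add x?" -- may be stale by the time the command runs.
        return x not in self._items

    def command(self, x):
        # Void command: instead of returning, it records whether the world
        # changed between the caller's query and the command.
        if x in self._items:
            self._last_status[x] = "ShouldHaveUsedALockError"
        else:
            self._items.add(x)
            self._last_status[x] = "ok"

    def status(self, x):
        return self._last_status.get(x)

s = Store()
if s.query("a"):
    s.command("a")
    result = s.status("a")   # "ok" here; the error under a lost race
```

Note the status is keyed only by the argument, which is exactly the weakness the reply below pokes at: two concurrent executions with the same argument would overwrite each other's status.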


Where is that state stored, in an environment where the same command could be executed with the same parameters but result in a different status, possibly in parallel? How do you connect a particular command execution with its particular resulting status? And if you manage to do so, what do you actually gain over the command just returning the status?

I’d argue that the separation makes things worse here, because it creates additional hidden state.

Also, as I stated, this is not about error handling.


CQRS should really only guide you to designing separate query and command interfaces. If your processing is asynchronous then you have no choice but to have state about processing-in-flight, and your commands should return an acknowledgement of successful receipt of valid commands with a unique identifier for querying progress or results. If your processing is synchronous make your life easier by just returning the result. Purity of CQRS void-only commands is presentation fodder, not practicality.

(One might argue that all RPC is asynchronous; all such arguments eventually lead to message buses, at-least-once delivery, and the reply-queue pattern, but maybe that's also just presentation fodder.)



