I can’t decide if this really is the biggest problem with CQS. Certainly the wiki page claims it is, and it’s a reasonable argument. For some simpler cases you could dodge it by wrapping the query/command pairs in a lock. Database calls are sketchier: a transaction only “fixes” the problem if you ignore the elephant in the room, which is that it measurably reduces system parallelism, because even in an MVCC database transactions aren’t free. They’re just cheaper.
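The lock-wrapping dodge looks roughly like this, a minimal sketch with hypothetical names (the classic CQS objection is that separating the query from the command reintroduces a check-then-act race, which the lock closes):

```python
import threading

class Account:
    """Toy CQS object: balance() is a query, withdraw() is a command."""

    def __init__(self, balance: int):
        self._balance = balance
        self._lock = threading.Lock()

    def balance(self) -> int:
        # Query: returns a value, no side effects.
        return self._balance

    def withdraw(self, amount: int) -> None:
        # Command: mutates state, returns nothing.
        self._balance -= amount

    def withdraw_if_covered(self, amount: int) -> bool:
        # Wrap the query/command pair in one lock so the
        # check-then-act sequence is atomic; without it, two
        # threads can both pass the balance check and overdraw.
        with self._lock:
            if self.balance() >= amount:
                self.withdraw(amount)
                return True
            return False
```

This works when one process owns the state; the moment the state lives in a database shared by many processes, the lock has to become a transaction, with the parallelism cost noted above.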
Caches always mess up computational models because they turn all reads into writes, which invalidates things you could previously establish with static analysis. I know a lot of tricks for making systems faster, and I’ve hardly ever seen anyone apply most of them to a system after caching was introduced. Caching has one upside and dozens of downsides as bad as or worse than this one.
One of the big benefits of CQRS is that everything becomes asynchronous, and you can handle write-heavy data with stream processing. Implementing distributed locks across your stream processing system is... unpleasant to contemplate.
If you really need locks, that generally locks you out of this kind of architecture, which makes the CQRS value proposition much flimsier.