distributed-computing,cap-theorem
It's the partition tolerance that you got wrong. As long as no partition happens, a system can be both consistent and available. There are CA systems that essentially say, "we don't care about partitions." You can run them inside a rack on server-grade hardware and make partitioning extremely unlikely. The problem...
Apache Accumulo is based on the Google BigTable paper and shares many similarities with Apache HBase. All three of these systems are intended to be CP: nodes will simply go down rather than serve inconsistent data.
You may need two actors: one (the coordinator) will send notifications about chat commands to clients, and another (the throttler) will push data to the database every 2 minutes. Your queue will just be internal state of the throttler: class Coordinator extends Actor { def receive = { case command: Record =>...
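To make that split concrete, here is a minimal sketch assuming Akka classic actors (2.6) with the Timers trait; Record, Flush, saveBatch and the client set are placeholder names, not part of the original answer:

    import akka.actor.{Actor, ActorRef, Timers}
    import scala.concurrent.duration._

    final case class Record(user: String, command: String) // placeholder message type
    case object Flush                                       // internal tick message

    // Throttler: buffers records as internal state and flushes them every 2 minutes.
    class Throttler(saveBatch: Seq[Record] => Unit) extends Actor with Timers {
      private var buffer = Vector.empty[Record]

      timers.startTimerWithFixedDelay("flush", Flush, 2.minutes)

      def receive: Receive = {
        case r: Record => buffer :+= r
        case Flush if buffer.nonEmpty =>
          saveBatch(buffer)          // push the accumulated batch to the database
          buffer = Vector.empty
        case Flush =>                // nothing buffered, skip this tick
      }
    }

    // Coordinator: fans chat commands out to clients and hands them to the throttler.
    class Coordinator(clients: () => Set[ActorRef], throttler: ActorRef) extends Actor {
      def receive: Receive = {
        case r: Record =>
          clients().foreach(_ ! r)
          throttler ! r
      }
    }

The point of the split is that the database write rate is controlled entirely by the throttler's timer, while the coordinator stays responsive to incoming commands.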
distributed,leveldb,cap-theorem,rocksdb
LevelDB and RocksDB are not distributed databases; they are embedded, single-node key-value stores. Hence, Brewer's CAP theorem is inapplicable.
hadoop,cap-theorem,availability
HDFS does not provide Availability in the case of multiple correlated failures (for instance, three failed data nodes holding the same HDFS block). From "CAP Confusion: Problems with partition tolerance": Systems such as ZooKeeper are explicitly sequentially consistent because there are few enough nodes in a cluster that the cost of...
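As a concrete illustration of the "same HDFS block" point, here is a small sketch using the Hadoop FileSystem API (in Scala here) that prints a file's replication factor; the path and cluster configuration are placeholders:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object ReplicationCheck extends App {
      val conf = new Configuration()      // picks up core-site.xml / hdfs-site.xml from the classpath
      val fs   = FileSystem.get(conf)
      val st   = fs.getFileStatus(new Path("/data/example.txt"))   // placeholder path
      // Each block of this file lives on `getReplication` datanodes (3 by default);
      // if all of them fail at the same time, the data is unavailable until they recover.
      println(s"replication factor: ${st.getReplication}")
      fs.close()
    }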
clojure,zookeeper,watch,stm,cap-theorem
This seems to be a limitation in the way ZooKeeper implements watches, not a limitation of the CAP theorem. There is an open feature request to add continuous watches to ZooKeeper: https://issues.apache.org/jira/browse/ZOOKEEPER-1416. etcd has a watch function that uses long polling. The limitation here, which you need to account for...
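For context on why a continuous watch is being requested at all, here is a minimal sketch against the ZooKeeper Java client (written in Scala here); the connection string, znode path, and callback are placeholders:

    import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}
    import org.apache.zookeeper.Watcher.Event.EventType

    object ReWatch {
      // Placeholder connection; the session watcher just ignores connection events.
      val zk = new ZooKeeper("localhost:2181", 30000, new Watcher {
        def process(event: WatchedEvent): Unit = ()
      })

      // ZooKeeper watches are one-shot: each trigger consumes the watch, so the
      // handler has to re-register it, and updates that happen between the trigger
      // and the re-registration are not delivered individually.
      def watchData(path: String)(onChange: Array[Byte] => Unit): Unit = {
        val watcher = new Watcher {
          def process(event: WatchedEvent): Unit =
            if (event.getType == EventType.NodeDataChanged)
              watchData(path)(onChange)            // re-arm the watch and re-read
        }
        val data = zk.getData(path, watcher, null) // reads current data and sets the watch
        onChange(data)
      }
    }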