You can’t avoid the CAP theorem, but you can isolate its complexity and prevent it from sabotaging your ability to reason about your systems. The complexity caused by the CAP theorem is a symptom of fundamental problems in how we approach building data systems. Two problems stand out in particular: the use of mutable state in databases and the use of incremental algorithms to update that state. It is the interaction between these problems and the CAP theorem that causes complexity.
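To make the two problems concrete, here's a minimal sketch (the names and data are mine, not from this post) contrasting a mutable, incrementally-updated value with an append-only log of immutable facts. With the mutable approach, each write destroys the previous value, so a single bad write corrupts the state permanently; with immutable facts, the current value is just a function recomputed over all the data ever recorded.

```python
# Mutable state + incremental updates: each write overwrites the old value.
counts = {}

def incr(url):
    # The previous count is lost forever; a buggy write here
    # leaves no way to recover the correct value.
    counts[url] = counts.get(url, 0) + 1

# Immutable alternative: record raw facts, never update in place.
events = []  # append-only log of (url, timestamp) facts

def record(url, ts):
    events.append((url, ts))  # nothing is ever overwritten

def pageviews(url):
    # A query is a function of all the data: recomputed, not incremented.
    return sum(1 for u, _ in events if u == url)

record("/home", 1)
record("/home", 2)
record("/about", 3)
print(pageviews("/home"))  # 2
```

Because the raw facts are never mutated, a buggy program can at worst append bad facts, which can later be deleted and the views recomputed; it can't destroy good data.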
In this post I’ll show the design of a system that beats the CAP theorem by preventing the complexity it normally causes. But I won’t stop there. The CAP theorem is a result about the degree to which data systems can tolerate machine failure. Yet there’s a form of fault-tolerance that’s much more important than machine fault-tolerance: human fault-tolerance. If there’s any certainty in software development, it’s that developers aren’t perfect and bugs will inevitably reach production. Our data systems must be resilient to buggy programs that write bad data, and the system I’m going to show is as human fault-tolerant as you can get.
This post is going to challenge your basic assumptions about how data systems should be built. But by breaking down our current ways of thinking and re-imagining data systems from the ground up, what emerges is an architecture more elegant, scalable, and robust than you ever thought possible.