Introduction: The Nightmare of Double Withdrawals
Imagine pressing the transfer button twice in quick succession in a banking app: the balance decreases only once, but the money is sent twice. Concurrency control in databases is not just theory; it is a survival issue directly tied to business reliability. In high-traffic ticketing services or real-time inventory management systems especially, data corruption caused by Interleaving is fatal. This post goes beyond textbook definitions to analyze concurrency problem patterns in real large-scale traffic environments and the modern architectural strategies (MVCC, Distributed Locks) that solve them.
Deepening Core Principles: Interleaving and Lost Updates
Interleaving is a scheduling technique in which the database executes operations from multiple transactions alternately, making them appear to run simultaneously. Without proper control, however, this alternation leads to the fatal Lost Update problem.
The Trap Scenario of Interleaving
For example, assume A and B each try to buy a product with only 1 item left in stock, just 0.001 seconds apart.
- A reads stock (Stock: 1)
- B reads stock (Stock: 1) -- Problem Point
- A decreases stock by 1 and saves (Stock: 0)
- B decreases stock by 1 and saves (Stock: 0)
Both updates report success, so a single item is sold twice: B's write silently overwrites A's. This is the classic Lost Update.
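The race above can be reproduced in a few lines of Python. This is a deliberately unsynchronized sketch (the `stock` dict and `buy` function are illustrative stand-ins, not a real DB API); the `sleep` widens the window between read and write so both threads act on the stale value:

```python
import threading
import time

stock = {"item": 1}   # shared inventory record, deliberately unprotected
purchases = []        # buyers whose purchase "succeeded"

def buy(buyer):
    current = stock["item"]      # Step 1: read stock (both threads see 1)
    time.sleep(0.05)             # widen the race window between read and write
    if current > 0:
        stock["item"] = current - 1   # Step 2: write based on the stale read
        purchases.append(buyer)

threads = [threading.Thread(target=buy, args=(b,)) for b in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(stock["item"], sorted(purchases))  # → 0 ['A', 'B'] — one item, two sales
```

Stock correctly reads 0 at the end, which is exactly why this bug is so insidious: nothing in the final state looks wrong until the second parcel ships.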
Permutations and Serializability
As transactions become more complex, the number of possible interleaving permutations increases explosively: for $n$ transactions of $m$ operations each, $(n \times m)! / (m!)^n$ orderings are possible. Databases must only allow schedules that produce results equivalent to executing the transactions sequentially (Serializable). Enforcing this is the role of Locking and Timestamp Ordering techniques.
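For intuition, the count is easy to evaluate directly; `interleavings` below is just a helper name for the formula above:

```python
from math import factorial

def interleavings(n, m):
    """Number of ways to interleave n transactions of m operations each,
    keeping each transaction's internal operation order intact."""
    return factorial(n * m) // factorial(m) ** n

print(interleavings(2, 2))  # → 6 schedules for two 2-operation transactions
print(interleavings(3, 5))  # → 756756 for just three 5-operation transactions
```

Of those six schedules in the two-transaction case, only the two fully sequential ones are trivially safe; the scheduler's job is to admit only interleavings equivalent to one of them.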
2025 Trend: The Era of Lock-Free Concurrency Control
Traditional Locking (2PL, Two-Phase Locking) has been a primary culprit behind performance degradation because of long lock wait times. The database trend in 2025 focuses on "maintaining consistency without locking."
MVCC (Multi-Version Concurrency Control)
MySQL (InnoDB) and PostgreSQL use MVCC to allow read operations to access consistent data without locking (Non-locking Consistent Read). Instead of overwriting data, it creates new versions so that readers and writers do not block each other. This has become the standard for high-performance web services.
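The version-chain idea can be sketched with a toy store. This is purely illustrative — real InnoDB and PostgreSQL visibility rules involve undo logs, snapshots, and commit status, not the simplified `txid <= snapshot` check used here — but it shows how readers and writers avoid blocking each other:

```python
class MVCCStore:
    """Toy multi-version store: writes append versions, reads pick the
    newest version visible to the reader's snapshot. Illustrative only."""

    def __init__(self):
        self.versions = {}   # key -> list of (txid, value), oldest first

    def write(self, txid, key, value):
        # Never overwrite: append a new version instead (writers don't block readers)
        self.versions.setdefault(key, []).append((txid, value))

    def read(self, snapshot_txid, key):
        # Return the newest version created at or before this snapshot
        visible = [v for t, v in self.versions.get(key, []) if t <= snapshot_txid]
        return visible[-1] if visible else None

store = MVCCStore()
store.write(txid=1, key="stock", value=10)
store.write(txid=3, key="stock", value=9)    # a later writer adds a version
print(store.read(snapshot_txid=2, key="stock"))  # → 10 (older snapshot, no lock needed)
print(store.read(snapshot_txid=3, key="stock"))  # → 9  (newer snapshot sees the update)
```

The key property is visible in the two reads: a transaction with an older snapshot keeps seeing a consistent value even while a writer has already produced a newer version.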
MSA Environment and Distributed Locks
While DB locks were sufficient in monolithic environments, in Microservices (MSA) environments multiple server instances access the same resource simultaneously, often across separate databases. In such cases, Distributed Locks built on Redis (Redisson) or ZooKeeper must be implemented to control concurrency at the application level.
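A minimal sketch of the lock-token pattern behind such distributed locks. The hypothetical `FakeRedis` below is an in-memory stand-in for a real Redis `SET key value NX` plus a Lua-scripted compare-and-delete release (Redisson wraps the same idea with leases and watchdog renewal):

```python
import uuid

class FakeRedis:
    """In-memory stand-in for Redis: just enough for SET NX and a
    conditional delete. Illustrative, not a real client."""

    def __init__(self):
        self.data = {}

    def set_nx(self, key, value):
        # SET ... NX: succeed only if the key does not exist yet
        if key in self.data:
            return False
        self.data[key] = value
        return True

    def delete_if_owner(self, key, token):
        # Real Redis does this atomically via a Lua script,
        # so one client can never release another client's lock
        if self.data.get(key) == token:
            del self.data[key]
            return True
        return False

def acquire_lock(r, resource):
    token = str(uuid.uuid4())    # unique token proves ownership at release
    return token if r.set_nx(f"lock:{resource}", token) else None

def release_lock(r, resource, token):
    return r.delete_if_owner(f"lock:{resource}", token)

r = FakeRedis()
t1 = acquire_lock(r, "stock:42")
t2 = acquire_lock(r, "stock:42")   # second service instance is rejected
print(t1 is not None, t2)          # → True None
release_lock(r, "stock:42", t1)    # now another instance may acquire it
```

In production the lock key also carries a TTL so a crashed holder cannot block the resource forever; that expiry, and the split-brain scenarios it creates, is precisely what algorithms and libraries in this space spend most of their complexity on.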
Practical Application: Pessimistic Lock vs. Optimistic Lock
Field engineers must choose a locking strategy based on the characteristics of the business logic.
- Pessimistic Lock: Assumes "collisions will be frequent" and locks the rows in advance (`SELECT ... FOR UPDATE`). Suitable where data integrity is paramount, such as inventory deduction, point usage, or financial transactions.
- Optimistic Lock: Assumes "collisions will be rare" and proceeds without locking, checking a version column at save time. Advantageous where performance is key and collisions are infrequent, such as editing posts or writing comments. Implementing Retry logic at the application level is mandatory.
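The optimistic path can be sketched as a compare-and-save on a version column plus application-level retry. The `Row` and `compare_and_save` names are illustrative, standing in for an `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?` statement:

```python
class Row:
    """Stand-in for a DB row with a version column."""
    def __init__(self, value):
        self.value = value
        self.version = 0

def compare_and_save(row, expected_version, new_value):
    """Save succeeds only if nobody else updated the row since we read it
    (the role of the WHERE version = ? clause)."""
    if row.version != expected_version:
        return False                 # conflict: a concurrent writer won
    row.value = new_value
    row.version += 1
    return True

def update_with_retry(row, transform, max_retries=3):
    """Mandatory application-level retry: re-read and re-attempt on conflict."""
    for _ in range(max_retries):
        snapshot_value, snapshot_version = row.value, row.version
        if compare_and_save(row, snapshot_version, transform(snapshot_value)):
            return True
    return False

row = Row(value=10)
stale_version = row.version
compare_and_save(row, row.version, 9)          # a concurrent writer saves first
print(compare_and_save(row, stale_version, 8)) # → False (our stale save is rejected)
print(update_with_retry(row, lambda v: v - 1)) # → True after re-reading
print(row.value)                               # → 8
```

Note that the rejected save loses no data: the caller simply re-reads and reapplies its change, which is why this approach only pays off when conflicts are genuinely rare.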
Expert Insight
💡 Backend Architect's Note
Tip for Tech Adoption: Unconditional use of locks is a shortcut to Deadlocks. Minimize the scope of transactions and maintain a consistent lock acquisition order. Especially when high performance is needed, consider using Redis Atomic operations (INCR, DECR) to manage inventory instead of DB locks, and reflect changes to the DB asynchronously (Write-Behind strategy).
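The atomic-deduction idea can be sketched with a counter whose decrement is indivisible. Here a Python lock stands in for the atomicity that Redis's single-threaded command execution provides; a real deployment would use `DECR`, or a small Lua script to avoid letting the counter go negative:

```python
import threading

class AtomicCounter:
    """Toy stand-in for a Redis counter: decrement-if-positive as one
    indivisible step, emulated with a lock."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def decr_if_positive(self):
        with self._lock:             # check and decrement happen atomically
            if self._value > 0:
                self._value -= 1
                return True
            return False

stock = AtomicCounter(1)
results = []
threads = [threading.Thread(target=lambda: results.append(stock.decr_if_positive()))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))   # → [False, True]: exactly one buyer succeeds
```

Contrast this with the unsynchronized race earlier: because check and decrement form one atomic step, the second buyer is cleanly rejected instead of silently double-selling. The successful decrements would then be flushed to the DB asynchronously under the Write-Behind strategy the note describes.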
Future Outlook: In the future, Serverless Transaction Management features provided by cloud-native DBs (like Aurora) will advance, reducing the burden on developers to manage locks manually. However, understanding the underlying theory of Serializability remains crucial.
Conclusion: Walking the Tightrope Between Performance and Stability
Concurrency control is the art of managing the trade-off between system Throughput and Data Consistency. Understanding the principles of interleaving, and knowing when to reach for Pessimistic, Optimistic, and Distributed locks according to the business situation, is a core competency of a backend engineer. Even in the complex distributed environments of 2025, the basic principles remain unchanged: keep asking whether your system upholds the principle that 'data must not lie.'