Recently, I attended a technology architecture event where over 20 tech managers from city commercial banks were invited. A manager from a major bank shared practical cases of distributed core system transformation. The presentation was excellent and packed with valuable insights!
During the Q&A session, a tech manager from a city commercial bank asked: "For financial institutions like ours, which include city commercial banks and rural credit cooperatives, would you recommend a distributed or centralized architecture for the next-generation core transaction system?"
The speaker answered: "Based on our practical experience, our principle is to avoid the distributed architecture whenever possible. If a centralized architecture can solve problems, then we should definitely use a centralized one. This is because we have a design principle to try to avoid distributed transactions. Cross-database transactions involve a large number of transaction nodes, resulting in high latency and complexity. In essence, we are using a grid-based centralized architecture, which is divided into multiple cells or shards. Each cell operates a centralized architecture. For city commercial banks with smaller transaction volumes and core database capacities, a centralized architecture is more suitable, providing sufficient capacity and reliability. At the application layer, we use virtualization, containerization, and microservices to boost system flexibility and agility, and to enable rapid iteration."
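The cell-based design the speaker describes can be sketched in a few lines: pin each customer to one cell with a stable hash, so that ordinary transactions stay inside a single centralized database and never become distributed transactions. This is only an illustrative sketch; the names (`route_to_cell`, `NUM_CELLS`, `is_local_transaction`) are hypothetical and not taken from the talk.

```python
import zlib

NUM_CELLS = 4  # assumption: a small, fixed number of cells/shards

def route_to_cell(customer_id: str) -> int:
    """Deterministically pin a customer to one cell via a stable hash.

    zlib.crc32 is used instead of Python's built-in hash(), because
    hash() for strings is randomized across interpreter runs.
    """
    return zlib.crc32(customer_id.encode("utf-8")) % NUM_CELLS

def is_local_transaction(payer_id: str, payee_id: str) -> bool:
    """True if both parties live in the same cell, so the transfer can
    run as an ordinary local database transaction. A cross-cell transfer
    would need distributed coordination (e.g. a saga or two-phase
    commit), which this design tries to avoid by co-locating related
    accounts in the same cell wherever possible."""
    return route_to_cell(payer_id) == route_to_cell(payee_id)
```

Each cell then runs a conventional centralized stack underneath, while the routing layer above it can still be containerized and iterated on independently, which is consistent with the speaker's point about keeping flexibility at the application layer.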
Consider this proven practice from a major bank. In fact, neither architecture is absolutely superior; the essential task is the migration to an XC technology stack. Both centralized and distributed systems can meet the requirements, and their effectiveness depends on the specific context. Rigid adherence to distributed architecture is impractical. Yet some vendors advocate a revolutionary shift to distributed architectures, claiming to overturn the classic design. But what is the cost of such a revolution? Are there really no sacrifices? And who bears that cost: the vendors, or the users? Blindly embracing the so-called distributed architecture while dismissing the classic centralized one deserves careful scrutiny.
Currently, the so-called distributed architecture is essentially a multi-copy primary/secondary architecture, which does not truly achieve multi-node load balancing.
As the speaker mentioned, a pragmatic approach is needed. Based on factors such as business pressure, architectural planning capabilities, O&M skills, and IT investment, it is essential to choose a suitable XC technology architecture. For small- and medium-sized financial institutions with peak transaction volumes of 200, 300, or even a few thousand TPS, is it truly necessary to build a distributed architecture with a "five DCs in three cities" setup designed for peaks of tens or hundreds of thousands of TPS? Are their developers and O&M engineers able to manage it effectively? Will it lock them in to certain vendors? Can they afford it?
According to many customers, over 90% of system architecture upgrades can be handled by a centralized approach, and only a small portion may require a distributed transformation or a few transaction sub-nodes. The securities industry has been doing this for over a decade: its centralized trading systems are managed in exactly this way. It is like managing a team of more than 50 people by splitting it into several smaller groups (with more refined names such as "shards" or "cells") for more precise management.
We should uphold fundamental principles while breaking new ground, because impractical or pseudo-innovations will only lead to significant risks and losses.
By BIXUSHUO