Welcome to my blog series. In this post, I will summarise streaming replication in PostgreSQL.
PostgreSQL's streaming replication, available since version 9.0 (with synchronous mode added in 9.1), provides single-primary, multi-standby replication based on log shipping. Three processes cooperate: the primary server runs a walsender, while the standby server runs a walreceiver and a startup process. The walsender and walreceiver communicate over a single TCP connection, through which WAL data is streamed efficiently to the standby.
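As a concrete starting point, here is a minimal configuration sketch. The parameter names are real PostgreSQL settings, but the values and the connection string are illustrative; on PostgreSQL 12 and later the standby settings live in postgresql.conf alongside an empty standby.signal file (older releases used recovery.conf):

```ini
# postgresql.conf on the primary
wal_level = replica        # emit enough WAL for replication
max_wal_senders = 5        # allow up to 5 concurrent walsender processes

# postgresql.conf on the standby (PostgreSQL 12+),
# plus an empty standby.signal file in the data directory
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
hot_standby = on           # allow read-only queries on the standby
```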
The replication process begins on the standby server, which launches the startup process and then the walreceiver process. When the walreceiver sends a connection request, the primary server spawns a walsender, and a TCP connection is established between the two. During handshaking, the walreceiver sends the primary the latest Log Sequence Number (LSN) of the standby's database cluster.
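To inspect the LSNs involved in this handshake, you can query each side (function names as of PostgreSQL 10; older releases used pg_current_xlog_location and related names):

```sql
-- On the primary: the latest WAL position written
SELECT pg_current_wal_lsn();

-- On the standby: the latest WAL received and the latest WAL replayed
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();
```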
If the standby's LSN is behind the primary's LSN, the walsender sends the missing Write-Ahead Log (WAL) records to the walreceiver; this phase is called catch-up. Once the standby has caught up, streaming replication proper begins, keeping the standby in sync with the primary.
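Once streaming is running, the remaining catch-up distance can be measured in bytes with pg_wal_lsn_diff. A sketch against the pg_stat_replication view (column names as of PostgreSQL 10):

```sql
-- On the primary: bytes of WAL each standby still has to replay
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```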
The primary server manages multiple standby servers using sync_priority and sync_state. sync_priority indicates each standby's priority for synchronous replication, while sync_state reports its status: sync, potential, or async. To guarantee consistency, the primary waits for an ACK response from each synchronous standby before acknowledging a commit to the client.
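Both values are exposed in the pg_stat_replication view, and the priorities themselves derive from the order of names in synchronous_standby_names (the standby names below are illustrative):

```sql
-- postgresql.conf on the primary (illustrative standby names):
-- synchronous_standby_names = 'FIRST 1 (standby1, standby2)'

-- On the primary: per-standby replication status
SELECT application_name, state, sync_priority, sync_state
FROM pg_stat_replication;
```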
Failures of standby servers are detected through dropped TCP connections, network issues, and unresponsiveness, with no WAL or ACK received within a configurable timeout.
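The relevant timeouts are configurable on both sides; the values shown here are the defaults:

```ini
# postgresql.conf on the primary
wal_sender_timeout = 60s            # drop standbys that stop responding

# postgresql.conf on the standby
wal_receiver_timeout = 60s          # give up on an unresponsive primary
wal_receiver_status_interval = 10s  # how often to send status/ACK messages
```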
By understanding the launch process, the communication protocol, and the management of multiple standby servers, PostgreSQL users can effectively establish and maintain streaming replication for high availability and data synchronization. This robust approach ensures data consistency and resilience in the event of primary server failures.