articles/postgresql/high-availability/concepts-high-availability.md (34 additions, 7 deletions)
@@ -4,7 +4,7 @@ description: This article describes high availability on an Azure Database for P
 author: gaurikasar
 ms.author: gkasar
 ms.reviewer: maghan
-ms.date: 03/23/2026
+ms.date: 04/09/2026
 ms.service: azure-database-postgresql
 ms.subservice: high-availability
 ms.topic: how-to
@@ -140,12 +140,6 @@ For a detailed guide on configuring and interpreting HA health statuses, see [Hi
140
140
141
141
- Planned events such as scale computing and scale storage happen on the standby first and then on the primary server. Currently, the server doesn't fail over for these planned operations.
142
142
143
-
- If you configure logical decoding or logical replication on an HA-enabled flexible server:
144
-
- In **PostgreSQL 16** and earlier, logical replication slots aren't preserved on the standby server after a failover by default.
145
-
- To ensure logical replication continues to function after failover, you need to enable the `pg_failover_slots` extension and configure supporting settings such as `hot_standby_feedback = on`.
146
-
- Starting with **PostgreSQL 17**, slot synchronization is supported natively. If you enable the correct PostgreSQL configurations (`sync_replication_slots`, `hot_standby_feedback`), logical replication slots are preserved automatically after failover, and no extension is required.
147
-
- For setup steps and prerequisites, refer to the [PG_Failover_Slots extension](../extensions/concepts-extensions-versions.md#pg_failover_slots) documentation.
148
-
149
143
- Configuring availability zones between private (virtual network) and public access with private endpoints isn't supported. You must configure availability zones within a virtual network (spanned across availability zones within a region) or public access with private endpoints.
150
144
151
145
- You can only configure availability zones within a single region. You can't configure availability zones across regions.
@@ -290,3 +284,36 @@ After a PostgreSQL failover, maintaining optimal database performance involves u
 In contrast, the `pg_stat_*` views, which provide runtime activity statistics such as the number of scans, tuples read, and updates, are stored in memory and reset upon failover. An example is `pg_stat_user_tables`, which tracks activity for user-defined tables. This reset accurately reflects the new primary's operational state, but it also means the loss of historical activity metrics that could inform the autovacuum process and other operational efficiencies.

 Given this distinction, consider running `ANALYZE` after a PostgreSQL failover. This action updates the `pg_stat_*` data (for example, `pg_stat_user_tables`) with fresh statistics, helping the autovacuum process, which in turn ensures that database performance remains optimal in its new role. This proactive step bridges the gap between preserving essential optimizer statistics and refreshing activity metrics to align with the database's current state.
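As a minimal sketch of the step described above (the table name `orders` is only an illustration, not from this article), the post-failover refresh can be as simple as:

```sql
-- Run on the new primary after failover.
-- Refresh planner statistics and activity counters for a single table:
ANALYZE VERBOSE orders;

-- Or refresh statistics for every table in the current database:
ANALYZE;
```

A database-wide `ANALYZE` takes longer on large databases; targeting the busiest tables first is a common compromise.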
## Logical replication support with HA

When you use logical replication or logical decoding with high availability (HA) in Azure Database for PostgreSQL flexible server, it's important to understand how replication slots behave during failover and how to ensure continuity of replication.

### PostgreSQL 16 and earlier

In PostgreSQL 16 and earlier, logical replication slots aren't automatically preserved on the standby server after a failover. To maintain logical replication across failover, you must:

- Enable the `pg_failover_slots` extension.
- Configure required settings such as `hot_standby_feedback = on`.

Without these configurations, logical replication might stop working after a failover because replication slots aren't available on the new primary.
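As a hedged sketch of applying these settings with the Azure CLI (the resource group and server names are placeholders, and this assumes `pg_failover_slots` is loaded through the `shared_preload_libraries` server parameter, which requires a restart):

```azurecli
# Placeholders: myResourceGroup, mydemoserver.
# Load the pg_failover_slots library (takes effect after a server restart).
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name shared_preload_libraries \
  --value pg_failover_slots

# Supporting setting so the standby retains the rows logical slots need.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name hot_standby_feedback \
  --value on

# Restart so the shared_preload_libraries change takes effect.
az postgres flexible-server restart \
  --resource-group myResourceGroup \
  --name mydemoserver
```

Schedule the restart in a maintenance window, because it briefly interrupts connections.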
### PostgreSQL 17 and later

Starting with PostgreSQL 17, logical replication slot synchronization is supported natively. When correctly configured, replication slots are automatically synchronized to the standby server. To enable this behavior:

- Set `sync_replication_slots = on`.
- Set `hot_standby_feedback = on`.

With these settings, logical replication slots are preserved during failover, and replication can continue without requiring extensions. For setup steps and prerequisites, refer to the [PG_Failover_Slots extension](../extensions/concepts-extensions-versions.md#pg_failover_slots) documentation.
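Note that in upstream PostgreSQL 17, only slots created with their `failover` property enabled (for example, `CREATE SUBSCRIPTION ... WITH (failover = true)` on the subscriber) are candidates for synchronization. A quick check on the primary might look like this sketch:

```sql
-- Run on the primary: list logical slots and their failover property.
-- Slots with failover = true are eligible for native synchronization
-- to the HA standby when sync_replication_slots is on.
SELECT slot_name, plugin, failover, active
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

Any logical slot showing `failover = false` here won't be synchronized, even with `sync_replication_slots = on`.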
### Important considerations

- Logical replication slots are managed on the primary server, but they **must also exist** on the standby to ensure that logical replication keeps working after an HA failover.
- System views (for example, `pg_replication_slots`) only show the state on the primary and don't confirm whether slots are synchronized to the standby. A system can appear healthy on the primary but still not be failover-ready.

To help validate failover readiness, you can use the Azure Monitor metric `logical_replication_slot_sync_status` (preview). This metric indicates whether logical replication slots are synchronized across the HA primary and standby:

- `1` indicates that slots are synchronized across the primary and standby.
- `0` indicates that slots aren't synchronized on the standby.

If the metric value is `0`, logical replication might continue to function on the current primary, but it might not continue after a failover. For more information, see [Logical replication monitoring](../monitor/concepts-monitoring.md#logical-replication).
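As an illustrative sketch (the subscription, resource group, and server names in the resource ID are placeholders), you can read this metric with the Azure CLI:

```azurecli
# Placeholder resource ID for the flexible server.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
  --metric logical_replication_slot_sync_status \
  --aggregation Maximum \
  --interval PT5M
```

The same metric can back an Azure Monitor alert rule, so you're notified before a failover rather than after.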
> [!NOTE]
> This synchronization state reflects the status across HA nodes and can't be verified using system views on the primary server alone. Consider using this metric with alerts to detect when logical replication isn't failover-ready, especially before planned maintenance or failover events.
0 commit comments