Vacuum Stats & WAL Activity
This section of the AWR report covers two critical operational dimensions: VACUUM activity (essential for PostgreSQL MVCC health) and WAL (Write-Ahead Log) generation (critical for write performance, replication, and recovery).
Vacuum Statistics
{screenshot: awr-vacuum-stats-section}
Why Vacuum Matters
PostgreSQL's MVCC architecture requires regular VACUUM operations to:
- Reclaim dead tuples (bloat removal)
- Update visibility maps (essential for index-only scans)
- Prevent transaction ID wraparound — a critical failure condition if ignored
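The wraparound risk in the last point can be checked directly against the standard catalogs. A minimal sketch, using the built-in `age()` function and `pg_database`:

```sql
-- Age of the oldest unfrozen transaction ID per database.
-- Values approaching autovacuum_freeze_max_age (default 200 million)
-- mean aggressive anti-wraparound vacuums are imminent.
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```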
Vacuum Activity Table
The AWR report shows vacuum activity per table during the report period, derived from pg_stat_user_tables deltas:
| Column | Description |
|---|---|
| Table | Schema and table name |
| Auto Vacuums | Number of autovacuum runs during the period |
| Manual Vacuums | Number of manually triggered VACUUM runs |
| Auto Analyzes | Number of autoanalyze runs |
| Manual Analyzes | Number of manually triggered ANALYZE runs |
| Live Tuples | Estimated live row count at end snapshot |
| Dead Tuples | Estimated dead tuple count at end snapshot |
| Last Autovacuum | Timestamp of the most recent autovacuum run |
| Last Autoanalyze | Timestamp of the most recent autoanalyze run |
{screenshot: awr-vacuum-table}
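A rough live equivalent of this table can be pulled from `pg_stat_user_tables` directly (standard column names). Note the difference: the AWR report shows deltas between two snapshots, while this query shows cumulative counters since the last statistics reset:

```sql
SELECT schemaname || '.' || relname AS table_name,
       autovacuum_count,    -- Auto Vacuums (cumulative)
       vacuum_count,        -- Manual Vacuums
       autoanalyze_count,   -- Auto Analyzes
       analyze_count,       -- Manual Analyzes
       n_live_tup,          -- Live Tuples (estimate)
       n_dead_tup,          -- Dead Tuples (estimate)
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```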
Interpreting Vacuum Statistics
Tables with no recent autovacuum → May have growing bloat or stale statistics. Check whether autovacuum thresholds are configured appropriately.
High dead tuple counts → Autovacuum may not be running frequently enough for high-churn tables. Consider lowering autovacuum_vacuum_scale_factor for those tables.
Frequent manual vacuums → May indicate autovacuum is insufficient. Review autovacuum configuration or schedule maintenance.
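To locate the high-churn tables described above, a dead-tuple-ratio query is a common starting point. A sketch; the `10000` cutoff is illustrative, not a recommendation:

```sql
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
           AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000          -- illustrative threshold
ORDER BY dead_pct DESC NULLS LAST
LIMIT 20;
```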
Autovacuum Configuration Tips
For high-churn tables, override autovacuum settings at the table level:
```sql
ALTER TABLE high_churn_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);
```
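To confirm the per-table overrides took effect (`high_churn_table` being the example table from above):

```sql
-- Table-level storage options are stored in pg_class.reloptions.
SELECT relname, reloptions
FROM pg_class
WHERE relname = 'high_churn_table';
```

Autovacuum triggers when dead tuples exceed `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × reltuples`, so lowering the scale factor matters most on large tables.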
WAL Activity
{screenshot: awr-wal-activity-section}
Why WAL Matters
The Write-Ahead Log is PostgreSQL's durability mechanism. Every data change is written to WAL before being applied to data files. WAL volume directly impacts:
- Disk I/O — high WAL generation saturates the WAL disk
- Replication lag — WAL must be shipped to standbys
- Recovery time — more WAL = longer crash recovery
- Checkpoint pressure — frequent checkpoints increase I/O load
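The WAL generation rate itself can be measured by sampling the insert position twice and diffing the LSNs. A psql-specific sketch (it relies on psql's `\gset` variable capture; the 10-second interval is arbitrary):

```sql
-- Capture the current WAL insert position into a psql variable.
SELECT pg_current_wal_lsn() AS lsn_start \gset
SELECT pg_sleep(10);
-- pg_wal_lsn_diff returns the byte distance between two LSNs.
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), :'lsn_start') / 10.0
       AS wal_bytes_per_sec;
```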
WAL Statistics Table
| Metric | Description |
|---|---|
| WAL Generated (bytes) | Total WAL volume during the report period |
| WAL Generated / sec | Average WAL generation rate |
| WAL Records | Number of WAL records written |
| Full Page Writes | Number of full page images written to WAL |
| WAL Buffers Full | Number of times WAL data was written to disk because wal_buffers filled before a scheduled flush |
{screenshot: awr-wal-stats-table}
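On PostgreSQL 14 and later, these counters are exposed by the `pg_stat_wal` view (cumulative since the last stats reset; the AWR report shows per-period deltas):

```sql
SELECT wal_records,        -- WAL Records
       wal_fpi,            -- Full Page Writes (full page images)
       wal_bytes,          -- WAL Generated (bytes)
       wal_buffers_full    -- WAL Buffers Full
FROM pg_stat_wal;
```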
Checkpoint Statistics
| Metric | Description |
|---|---|
| Checkpoints Timed | Checkpoints triggered by checkpoint_timeout |
| Checkpoints Requested | Checkpoints triggered by WAL volume (max_wal_size) |
| Checkpoint Write Time | Total time spent writing dirty buffers during checkpoints |
| Checkpoint Sync Time | Total time spent syncing files to disk |
| Buffers Written (Checkpoint) | Dirty buffers flushed by checkpoints |
| Buffers Written (Background) | Dirty buffers flushed by bgwriter |
{screenshot: awr-checkpoint-stats}
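These checkpoint counters are backed by `pg_stat_bgwriter` through PostgreSQL 16 (in PostgreSQL 17 the checkpoint columns moved to the separate `pg_stat_checkpointer` view):

```sql
SELECT checkpoints_timed,       -- Checkpoints Timed
       checkpoints_req,         -- Checkpoints Requested
       checkpoint_write_time,   -- ms writing dirty buffers at checkpoints
       checkpoint_sync_time,    -- ms syncing files to disk
       buffers_checkpoint,      -- Buffers Written (Checkpoint)
       buffers_clean            -- Buffers Written (Background)
FROM pg_stat_bgwriter;
```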
Interpreting WAL and Checkpoint Data
High Checkpoints Requested vs. Timed → WAL volume is reaching max_wal_size before checkpoint_timeout elapses. Increase max_wal_size to reduce checkpoint frequency.
High Checkpoint Write Time → Checkpoint I/O is taking too long, which may indicate slow disks or too low a checkpoint_completion_target. Set checkpoint_completion_target = 0.9 to spread checkpoint writes across the interval.
High Full Page Writes → Many pages are being written for the first time after a checkpoint. This is normal after frequent checkpoints; reducing checkpoint frequency also reduces FPW overhead.
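The tuning suggestions above can be applied with ALTER SYSTEM. The values here are illustrative starting points, not universal recommendations; both settings can be changed without a restart:

```sql
ALTER SYSTEM SET max_wal_size = '8GB';                  -- illustrative value
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();   -- both settings take effect on config reload
```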