
Vacuum Stats & WAL Activity

This section of the AWR report covers two key operational dimensions: VACUUM activity (essential for PostgreSQL MVCC health) and WAL (Write-Ahead Log) generation (critical for write performance, replication, and recovery).


Vacuum Statistics

{screenshot: awr-vacuum-stats-section}

Why Vacuum Matters

PostgreSQL's MVCC architecture requires regular VACUUM operations to:

  • Reclaim dead tuples (bloat removal)
  • Update visibility maps (essential for index-only scans)
  • Prevent transaction ID wraparound — a critical failure condition if ignored
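Wraparound risk can also be checked directly, outside the report; a sketch that ranks databases by the age of their oldest unfrozen transaction ID (the 200 million figure is the default autovacuum_freeze_max_age):

```sql
-- Age of the oldest unfrozen transaction ID per database.
-- Autovacuum forces an anti-wraparound vacuum once this age
-- exceeds autovacuum_freeze_max_age (200 million by default).
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```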

Vacuum Activity Table

The AWR report shows vacuum activity per table during the report period, derived from pg_stat_user_tables deltas:

Column              Description
Table               Schema and table name
Auto Vacuums        Number of autovacuum runs during the period
Manual Vacuums      Number of manually triggered VACUUM runs
Auto Analyzes       Number of autoanalyze runs
Manual Analyzes     Number of manually triggered ANALYZE runs
Live Tuples         Estimated live row count at the end snapshot
Dead Tuples         Estimated dead tuple count at the end snapshot
Last Autovacuum     Timestamp of the most recent autovacuum run
Last Autoanalyze    Timestamp of the most recent autoanalyze run

{screenshot: awr-vacuum-table}
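The same counters can be queried live from pg_stat_user_tables; a sketch (note that the AWR report shows deltas between snapshots, while this view shows cumulative values since the last stats reset):

```sql
SELECT schemaname || '.' || relname AS table_name,
       autovacuum_count,      -- Auto Vacuums (cumulative)
       vacuum_count,          -- Manual Vacuums
       autoanalyze_count,     -- Auto Analyzes
       analyze_count,         -- Manual Analyzes
       n_live_tup,            -- Live Tuples (estimate)
       n_dead_tup,            -- Dead Tuples (estimate)
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```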

Interpreting Vacuum Statistics

Tables with no recent autovacuum → May have growing bloat or stale statistics. Check whether autovacuum thresholds are configured appropriately.

High dead tuple counts → Autovacuum may not be running frequently enough for high-churn tables. Consider lowering autovacuum_vacuum_scale_factor for those tables.

Frequent manual vacuums → May indicate autovacuum is insufficient. Review autovacuum configuration or schedule maintenance.
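A quick way to surface the high-churn tables described above is to rank by dead-tuple ratio; a sketch (the 10,000-row floor is an arbitrary filter to exclude small tables where the ratio is noisy):

```sql
SELECT schemaname || '.' || relname AS table_name,
       n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
           AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
WHERE n_live_tup + n_dead_tup > 10000   -- skip small, noisy tables
ORDER BY dead_pct DESC NULLS LAST
LIMIT 20;
```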

Autovacuum Configuration Tips

For high-churn tables, override autovacuum settings at the table level:

ALTER TABLE high_churn_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);
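To confirm which tables already carry per-table overrides, the settings are stored in pg_class.reloptions:

```sql
-- List tables that have per-table autovacuum overrides set.
SELECT relname, reloptions
FROM pg_class
WHERE reloptions::text LIKE '%autovacuum%';
```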

WAL Activity

{screenshot: awr-wal-activity-section}

Why WAL Matters

The Write-Ahead Log is PostgreSQL's durability mechanism. Every data change is written to WAL before being applied to data files. WAL volume directly impacts:

  • Disk I/O — high WAL generation saturates the WAL disk
  • Replication lag — WAL must be shipped to standbys
  • Recovery time — more WAL to replay means longer crash recovery
  • Checkpoint pressure — frequent checkpoints increase I/O load
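Outside the report window, the current WAL generation rate can be sampled by diffing the WAL insert position over an interval; a sketch using psql's \gset and an illustrative 10-second sample:

```sql
-- Run in a single psql session.
SELECT pg_current_wal_lsn() AS start_lsn \gset
SELECT pg_sleep(10);
SELECT pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), :'start_lsn') / 10
       ) AS wal_bytes_per_sec;
```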

WAL Statistics Table

Metric                  Description
WAL Generated (bytes)   Total WAL volume during the report period
WAL Generated / sec     Average WAL generation rate
WAL Records             Number of WAL records written
Full Page Writes        Number of full-page images written to WAL
WAL Buffers Full        Number of times WAL data was written out because the WAL buffers were full

{screenshot: awr-wal-stats-table}
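On PostgreSQL 14 and later, these counters come from the pg_stat_wal view (cumulative since the last stats reset):

```sql
SELECT wal_records,        -- WAL Records
       wal_fpi,            -- Full Page Writes (full-page images)
       wal_bytes,          -- WAL Generated (bytes)
       wal_buffers_full,   -- WAL Buffers Full
       stats_reset
FROM pg_stat_wal;
```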

Checkpoint Statistics

Metric                          Description
Checkpoints Timed               Checkpoints triggered by checkpoint_timeout
Checkpoints Requested           Checkpoints triggered by WAL volume (max_wal_size)
Checkpoint Write Time           Total time spent writing dirty buffers during checkpoints
Checkpoint Sync Time            Total time spent syncing files to disk
Buffers Written (Checkpoint)    Dirty buffers flushed by checkpoints
Buffers Written (Background)    Dirty buffers flushed by the background writer

{screenshot: awr-checkpoint-stats}
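These counters come from pg_stat_bgwriter on PostgreSQL 16 and earlier; on PostgreSQL 17+, the checkpoint fields moved to the pg_stat_checkpointer view. A sketch for the pre-17 layout:

```sql
SELECT checkpoints_timed,       -- Checkpoints Timed
       checkpoints_req,         -- Checkpoints Requested
       checkpoint_write_time,   -- Checkpoint Write Time (ms)
       checkpoint_sync_time,    -- Checkpoint Sync Time (ms)
       buffers_checkpoint,      -- Buffers Written (Checkpoint)
       buffers_clean            -- Buffers Written (Background)
FROM pg_stat_bgwriter;
```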

Interpreting WAL and Checkpoint Data

High Checkpoints Requested vs. Timed → max_wal_size is being reached before checkpoint_timeout elapses, forcing extra checkpoints. Increase max_wal_size to reduce checkpoint frequency.

High Checkpoint Write Time → Checkpoint I/O is taking too long, which may indicate slow disks or a checkpoint_completion_target that is too low. Set checkpoint_completion_target = 0.9 (the default since PostgreSQL 14) to spread checkpoint writes across the interval.

High Full Page Writes → Many pages are being written for the first time after a checkpoint. This is normal after frequent checkpoints; reducing checkpoint frequency also reduces FPW overhead.
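Both remedies above can be applied without a restart; a sketch (the 8GB value is illustrative, so size max_wal_size to your actual WAL throughput):

```sql
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();   -- both parameters are reloadable
```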


Next Steps