author     Masahiko Sawada <msawada@postgresql.org>    2024-04-02 10:15:37 +0900
committer  Masahiko Sawada <msawada@postgresql.org>    2024-04-02 10:15:37 +0900
commit     667e65aac354975c6f8090c6146fceb8d7b762d6
tree       ecefa5c922788f32aa643e4639561d7ba9ac1801 /doc/src
parent     d5d2205c8ddc6670fa87474e172fdfab162b7a73
Use TidStore for dead tuple TIDs storage during lazy vacuum.
Previously, we used a simple array for storing dead tuple TIDs during
lazy vacuum, which had a number of problems (a minimal sketch of the
old scheme appears after this list):
* The array used a single allocation and so was limited to 1GB.
* The allocation was pessimistically sized according to table size.
* Lookup with binary search was slow because of poor CPU cache and
branch prediction behavior.
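For illustration, here is a minimal, self-contained C sketch of that
old scheme; the Tid struct and both functions are simplified stand-ins,
not PostgreSQL's ItemPointerData or the actual vacuum code:

    #include <stdlib.h>

    typedef struct Tid
    {
        unsigned int   block;    /* heap block number */
        unsigned short offset;   /* line pointer number within the block */
    } Tid;

    static int
    tid_cmp(const void *a, const void *b)
    {
        const Tid *x = a;
        const Tid *y = b;

        if (x->block != y->block)
            return (x->block < y->block) ? -1 : 1;
        if (x->offset != y->offset)
            return (x->offset < y->offset) ? -1 : 1;
        return 0;
    }

    /* One pessimistic allocation: assume every line pointer could be dead. */
    static Tid *
    alloc_dead_tids(size_t max_tuples)
    {
        return malloc(max_tuples * sizeof(Tid));    /* one block, hard-capped */
    }

    /* O(log n) membership test; each probe is a likely cache miss. */
    static int
    tid_is_dead(const Tid *tids, size_t ntids, const Tid *key)
    {
        return bsearch(key, tids, ntids, sizeof(Tid), tid_cmp) != NULL;
    }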
This commit replaces that array with the TID store from commit
30e144287a.
Since the backing radix tree makes small allocations as needed, the
1GB limit is now gone. Further, the total memory used is now often
smaller by an order of magnitude or more, depending on the
distribution of blocks and offsets. These two features should make
multiple rounds of heap scanning and index cleanup an extremely rare
event. TID lookup during index cleanup is also several times faster,
even more so when index order is correlated with heap tuple order.
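To make the memory and lookup claims concrete, here is a toy C
analogue, not the actual TidStore API: dead TIDs are keyed by block
number, and each block's dead offsets live in a small bitmap allocated
only when that block has dead tuples. The real store keys a radix tree
by block number rather than the flat pointer array assumed here, but
the effect illustrated is the same: space scales with the blocks that
actually contain dead tuples, and a membership test is a couple of
indexed loads instead of a binary search.

    #include <stdint.h>
    #include <stdlib.h>

    #define MAX_OFFSET 291          /* ~max line pointers on an 8kB heap page */

    typedef struct BlockBitmap
    {
        uint64_t bits[(MAX_OFFSET + 63) / 64];  /* one bit per offset */
    } BlockBitmap;

    typedef struct ToyTidStore
    {
        BlockBitmap **blocks;   /* indexed by block number; NULL = none dead */
        size_t        nblocks;
    } ToyTidStore;

    static ToyTidStore *
    toy_create(size_t nblocks)
    {
        ToyTidStore *ts = malloc(sizeof(ToyTidStore));  /* checks omitted */

        ts->blocks = calloc(nblocks, sizeof(BlockBitmap *));
        ts->nblocks = nblocks;
        return ts;
    }

    /* Memory is allocated per block, only when that block has dead tuples. */
    static void
    toy_set_offset(ToyTidStore *ts, size_t block, unsigned offset)
    {
        if (ts->blocks[block] == NULL)
            ts->blocks[block] = calloc(1, sizeof(BlockBitmap));
        ts->blocks[block]->bits[offset / 64] |= UINT64_C(1) << (offset % 64);
    }

    /* Two indexed loads instead of a binary search over all TIDs. */
    static int
    toy_is_member(const ToyTidStore *ts, size_t block, unsigned offset)
    {
        const BlockBitmap *bm = ts->blocks[block];

        return bm != NULL && ((bm->bits[offset / 64] >> (offset % 64)) & 1);
    }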
Since there is no longer a predictable relationship between the number
of dead tuples vacuumed and the space taken up by their TIDs, the
number of tuples no longer provides any meaningful insights for users,
nor is the maximum number predictable. For that reason this commit
also changes to byte-based progress reporting, with the relevant
columns of pg_stat_progress_vacuum renamed accordingly to
max_dead_tuple_bytes and dead_tuple_bytes.
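A hypothetical sketch of the budgeting this enables; the names below
are invented for illustration and are not PostgreSQL identifiers. The
trigger for an index vacuum cycle can now compare the store's actual
byte footprint against the memory budget, with no precomputed tuple
count involved, and that same byte figure is what a bytes-based
progress column would surface:

    #include <stdbool.h>
    #include <stddef.h>

    /* store_bytes: bytes currently used by the dead-TID store;
     * mem_limit_bytes: the maintenance memory budget. */
    static bool
    need_index_vacuum_cycle(size_t store_bytes, size_t mem_limit_bytes)
    {
        return store_bytes >= mem_limit_bytes;
    }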
For parallel vacuum, both the TID store and supplemental information
specific to vacuum are shared among the parallel vacuum workers. As
with the previous array, we don't take any locks on TidStore during
parallel vacuum since writes are still only done by the leader
process.
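As a loose analogy for that single-writer invariant, assuming POSIX
threads rather than PostgreSQL's dynamic shared memory: because the
write phase finishes before any reader starts, no locking on the store
itself is required.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NITEMS 1000

    static int shared_store[NITEMS];    /* stands in for the shared TID store */

    static void *
    worker(void *arg)
    {
        long sum = 0;

        /* Read-only access: safe without locks given the phase ordering. */
        for (int i = 0; i < NITEMS; i++)
            sum += shared_store[i];
        printf("worker %ld saw sum %ld\n", (long) (intptr_t) arg, sum);
        return NULL;
    }

    int
    main(void)
    {
        pthread_t workers[2];

        /* Write phase: only the "leader" (main thread) fills the store. */
        for (int i = 0; i < NITEMS; i++)
            shared_store[i] = i;

        /* Workers start only after all writes are done; thread creation
         * publishes those writes to the new threads. */
        for (long w = 0; w < 2; w++)
            pthread_create(&workers[w], NULL, worker, (void *) (intptr_t) w);
        for (int w = 0; w < 2; w++)
            pthread_join(workers[w], NULL);
        return 0;
    }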
Bump catalog version.
Reviewed-by: John Naylor, (in an earlier version) Dilip Kumar
Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com
Diffstat (limited to 'doc/src')
 doc/src/sgml/config.sgml     | 12 ------------
 doc/src/sgml/monitoring.sgml |  8 ++++----
 2 files changed, 4 insertions(+), 16 deletions(-)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f65c17e5ae4..0e9617bcff4 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1919,11 +1919,6 @@ include_dir 'conf.d'
         too high. It may be useful to control for this by separately
         setting <xref linkend="guc-autovacuum-work-mem"/>.
        </para>
-       <para>
-        Note that for the collection of dead tuple identifiers,
-        <command>VACUUM</command> is only able to utilize up to a maximum of
-        <literal>1GB</literal> of memory.
-       </para>
       </listitem>
      </varlistentry>
 
@@ -1946,13 +1941,6 @@ include_dir 'conf.d'
         <filename>postgresql.conf</filename> file or on the server command
         line.
        </para>
-       <para>
-        For the collection of dead tuple identifiers, autovacuum is only able
-        to utilize up to a maximum of <literal>1GB</literal> of memory, so
-        setting <varname>autovacuum_work_mem</varname> to a value higher than
-        that has no effect on the number of dead tuples that autovacuum can
-        collect while scanning a table.
-       </para>
       </listitem>
      </varlistentry>
 
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 8736eac2841..6a74e4a24df 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -6237,10 +6237,10 @@ FROM pg_stat_get_backend_idset() AS backendid;
 
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
-       <structfield>max_dead_tuples</structfield> <type>bigint</type>
+       <structfield>max_dead_tuple_bytes</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of dead tuples that we can store before needing to perform
+       Amount of dead tuple data that we can store before needing to perform
        an index vacuum cycle, based on
        <xref linkend="guc-maintenance-work-mem"/>.
       </para></entry>
@@ -6248,10 +6248,10 @@ FROM pg_stat_get_backend_idset() AS backendid;
 
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
-       <structfield>num_dead_tuples</structfield> <type>bigint</type>
+       <structfield>dead_tuple_bytes</structfield> <type>bigint</type>
       </para>
       <para>
-       Number of dead tuples collected since the last index vacuum cycle.
+       Amount of dead tuple data collected since the last index vacuum cycle.
       </para></entry>
      </row>