author     Andres Freund <andres@anarazel.de>  2016-07-18 02:01:13 -0700
committer  Andres Freund <andres@anarazel.de>  2016-07-18 02:01:13 -0700
commit     eca0f1db14ac92d91d54eca8eeff2d15ccd797fa
tree       293bb9becbd0ca0b53c00816fa7bdd6f08d94f02 /src/backend/access/heap/visibilitymap.c
parent     65632082b7eb3c7d56f1b42e1df452d0f66bc189
Clear all-frozen visibilitymap status when locking tuples.
Since a892234 & fd31cd265 the visibilitymap's freeze bit is used to
avoid vacuuming the whole relation in anti-wraparound vacuums. Doing so
correctly relies on never adding xids to the heap without also clearing
the corresponding visibilitymap bit. The tuple locking code has not
been doing so.
To avoid pessimizing heap_lock_tuple(), allow callers to selectively
reset just the all-frozen bit with visibilitymap_clear(). To avoid
having to use visibilitymap_get_status() (e.g. via VM_ALL_FROZEN)
inside a critical section, have visibilitymap_clear() return whether
any bits have been cleared.
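For illustration only (this snippet is not part of the commit), a
caller wanting to drop just the all-frozen bit under the new interface
might look roughly like the following; rel, heapBlk and vmbuffer are
hypothetical locals, while the functions and the
VISIBILITYMAP_ALL_FROZEN flag are the existing visibilitymap API:

	/*
	 * Illustrative caller sketch, not code from this commit.  The map
	 * page must already be pinned -- visibilitymap_pin() may do I/O, so
	 * it has to happen before any critical section;
	 * visibilitymap_clear() itself does no I/O.
	 */
	Buffer		vmbuffer = InvalidBuffer;

	visibilitymap_pin(rel, heapBlk, &vmbuffer);

	/* Reset only all-frozen; a still-correct all-visible bit survives. */
	if (visibilitymap_clear(rel, heapBlk, vmbuffer,
							VISIBILITYMAP_ALL_FROZEN))
	{
		/*
		 * Something was actually cleared.  The return value is what lets
		 * callers avoid a visibilitymap_get_status() / VM_ALL_FROZEN
		 * check inside a critical section.
		 */
	}

	ReleaseBuffer(vmbuffer);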
There's a remaining issue (denoted by XXX): after the PageIsAllVisible()
check in heap_lock_tuple() and heap_lock_updated_tuple_rec(), the page's
visibility status could theoretically change. In practice that currently
seems impossible, because updaters already hold a page-level pin. With
the next beta coming up, it seems better to get the required WAL magic
bump done before resolving this issue.
The flags fields added to xl_heap_lock and xl_heap_lock_updated require
bumping the WAL magic. Since there has already been a catversion bump
since the last beta, that's not an issue.
Reviewed-By: Robert Haas, Amit Kapila and Andres Freund
Author: Masahiko Sawada, heavily revised by Andres Freund
Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
Backpatch: -
Diffstat (limited to 'src/backend/access/heap/visibilitymap.c')
 src/backend/access/heap/visibilitymap.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/backend/access/heap/visibilitymap.c b/src/backend/access/heap/visibilitymap.c
index b472d31a03c..3ad4a9f5870 100644
--- a/src/backend/access/heap/visibilitymap.c
+++ b/src/backend/access/heap/visibilitymap.c
@@ -11,7 +11,7 @@
  *		src/backend/access/heap/visibilitymap.c
  *
  * INTERFACE ROUTINES
- *		visibilitymap_clear	 - clear a bit in the visibility map
+ *		visibilitymap_clear	 - clear bits for one page in the visibility map
  *		visibilitymap_pin	 - pin a map page for setting a bit
  *		visibilitymap_pin_ok - check whether correct map page is already pinned
  *		visibilitymap_set	 - set a bit in a previously pinned page
@@ -159,20 +159,23 @@ static void vm_extend(Relation rel, BlockNumber nvmblocks);
 
 
 /*
- *	visibilitymap_clear - clear all bits for one page in visibility map
+ *	visibilitymap_clear - clear specified bits for one page in visibility map
  *
  * You must pass a buffer containing the correct map page to this function.
  * Call visibilitymap_pin first to pin the right one. This function doesn't do
- * any I/O.
+ * any I/O.  Returns true if any bits have been cleared and false otherwise.
  */
-void
-visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer buf)
+bool
+visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer buf, uint8 flags)
 {
 	BlockNumber	mapBlock = HEAPBLK_TO_MAPBLOCK(heapBlk);
 	int			mapByte = HEAPBLK_TO_MAPBYTE(heapBlk);
 	int			mapOffset = HEAPBLK_TO_OFFSET(heapBlk);
-	uint8		mask = VISIBILITYMAP_VALID_BITS << mapOffset;
+	uint8		mask = flags << mapOffset;
 	char	   *map;
+	bool		cleared = false;
+
+	Assert(flags & VISIBILITYMAP_VALID_BITS);
 
 #ifdef TRACE_VISIBILITYMAP
 	elog(DEBUG1, "vm_clear %s %d", RelationGetRelationName(rel), heapBlk);
@@ -189,9 +192,12 @@ visibilitymap_clear(Relation rel, BlockNumber heapBlk, Buffer buf)
 		map[mapByte] &= ~mask;
 
 		MarkBufferDirty(buf);
+		cleared = true;
 	}
 
 	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
+
+	return cleared;
 }
 
 /*
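As an aside on the mask arithmetic above: each heap page is tracked by
two adjacent bits (all-visible and all-frozen) in the map, so
"flags << mapOffset" isolates exactly the requested bits for one page.
The following standalone sketch paraphrases HEAPBLK_TO_OFFSET() and the
9.6 flag values from visibilitymap.c/visibilitymap.h to demonstrate
that; the constants are assumptions taken from that layout, not code
from the tree:

/* Standalone demonstration; constants mirror the 9.6 two-bit layout. */
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_HEAPBLOCK			2	/* all-visible + all-frozen */
#define HEAPBLOCKS_PER_BYTE			(8 / BITS_PER_HEAPBLOCK)
#define VISIBILITYMAP_ALL_VISIBLE	0x01
#define VISIBILITYMAP_ALL_FROZEN	0x02

int
main(void)
{
	uint32_t	heapBlk = 5;
	/* paraphrase of HEAPBLK_TO_OFFSET() */
	int			mapOffset = (heapBlk % HEAPBLOCKS_PER_BYTE) * BITS_PER_HEAPBLOCK;
	uint8_t		mapByte = 0xFF;	/* all four heap pages' bits fully set */
	uint8_t		mask = VISIBILITYMAP_ALL_FROZEN << mapOffset;

	mapByte &= ~mask;

	/* Prints 0xF7: only block 5's all-frozen bit went away. */
	printf("0x%02X\n", (unsigned) mapByte);
	return 0;
}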