author | Melanie Plageman <melanieplageman@gmail.com> | 2024-10-25 10:11:58 -0400
committer | Melanie Plageman <melanieplageman@gmail.com> | 2024-10-25 10:11:58 -0400
commit | de380a62b5dae610b3504b5036e5d5b1150cc4a4
tree | e1e874e8a346f1e7107c5b935f8e12c7b741ab58 /src/backend/access/heap/heapam_handler.c
parent | 7bd7aa4d30676de006636bb2c9c079c363d9d56c
Make table_scan_bitmap_next_block() async-friendly
Move all responsibility for indicating a block is exhausted into
table_scan_bitmap_next_tuple() and advance the main iterator in
heap-specific code. This flow control makes more sense and is a step
toward using the read stream API for bitmap heap scans.
Previously, table_scan_bitmap_next_block() returned false to indicate
table_scan_bitmap_next_tuple() should not be called for the tuples on
the page. This happened both when 1) there were no visible tuples on the
page and 2) when the block returned by the iterator was past the end of
the table. BitmapHeapNext() (generic bitmap table scan code) handled the
case when the bitmap was exhausted.
It makes more sense for table_scan_bitmap_next_tuple() to return false
when there are no visible tuples on the page and
table_scan_bitmap_next_block() to return false when the bitmap is
exhausted or there are no more blocks in the table.
As part of this new design, TBMIterateResults are no longer used as a
flow control mechanism in BitmapHeapNext(), so we removed
table_scan_bitmap_next_tuple()'s TBMIterateResult parameter.
Note that the prefetch iterator is still saved in the
BitmapHeapScanState node and advanced in generic bitmap table scan code.
This is because 1) it was not necessary to change the prefetch iterator
location to change the flow control in BitmapHeapNext(), 2) modifying
prefetch iterator management requires several more steps better split
over multiple commits, and 3) the prefetch iterator will be removed once
the read stream API is used.
Author: Melanie Plageman
Reviewed-by: Tomas Vondra, Andres Freund, Heikki Linnakangas, Mark Dilger
Discussion: https://postgr.es/m/063e4eb4-32d9-439e-a0b1-75565a9835a8%40iki.fi
Diffstat (limited to 'src/backend/access/heap/heapam_handler.c')
-rw-r--r-- | src/backend/access/heap/heapam_handler.c | 56
1 file changed, 42 insertions(+), 14 deletions(-)
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 166aab7a93c..a8d95e0f1c1 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -2115,18 +2115,49 @@ heapam_estimate_rel_size(Relation rel, int32 *attr_widths,
 static bool
 heapam_scan_bitmap_next_block(TableScanDesc scan,
-							  TBMIterateResult *tbmres,
+							  BlockNumber *blockno,
 							  bool *recheck,
 							  uint64 *lossy_pages, uint64 *exact_pages)
 {
 	HeapScanDesc hscan = (HeapScanDesc) scan;
-	BlockNumber block = tbmres->blockno;
+	BlockNumber block;
 	Buffer		buffer;
 	Snapshot	snapshot;
 	int			ntup;
+	TBMIterateResult *tbmres;
 
 	hscan->rs_cindex = 0;
 	hscan->rs_ntuples = 0;
 
+	*blockno = InvalidBlockNumber;
+	*recheck = true;
+
+	do
+	{
+		CHECK_FOR_INTERRUPTS();
+
+		if (scan->st.bitmap.rs_shared_iterator)
+			tbmres = tbm_shared_iterate(scan->st.bitmap.rs_shared_iterator);
+		else
+			tbmres = tbm_iterate(scan->st.bitmap.rs_iterator);
+
+		if (tbmres == NULL)
+			return false;
+
+		/*
+		 * Ignore any claimed entries past what we think is the end of the
+		 * relation. It may have been extended after the start of our scan (we
+		 * only hold an AccessShareLock, and it could be inserts from this
+		 * backend). We don't take this optimization in SERIALIZABLE
+		 * isolation though, as we need to examine all invisible tuples
+		 * reachable by the index.
+		 */
+	} while (!IsolationIsSerializable() &&
+			 tbmres->blockno >= hscan->rs_nblocks);
+
+	/* Got a valid block */
+	*blockno = tbmres->blockno;
+	*recheck = tbmres->recheck;
+
 	/*
 	 * We can skip fetching the heap page if we don't need any fields from the
 	 * heap, the bitmap entries don't need rechecking, and all tuples on the
@@ -2145,16 +2176,7 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
 		return true;
 	}
 
-	/*
-	 * Ignore any claimed entries past what we think is the end of the
-	 * relation. It may have been extended after the start of our scan (we
-	 * only hold an AccessShareLock, and it could be inserts from this
-	 * backend). We don't take this optimization in SERIALIZABLE isolation
-	 * though, as we need to examine all invisible tuples reachable by the
-	 * index.
-	 */
-	if (!IsolationIsSerializable() && block >= hscan->rs_nblocks)
-		return false;
+	block = tbmres->blockno;
 
 	/*
 	 * Acquire pin on the target heap page, trading in any pin we held before.
@@ -2249,12 +2271,18 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
 	else
 		(*lossy_pages)++;
 
-	return ntup > 0;
+	/*
+	 * Return true to indicate that a valid block was found and the bitmap is
+	 * not exhausted. If there are no visible tuples on this page,
+	 * hscan->rs_ntuples will be 0 and heapam_scan_bitmap_next_tuple() will
+	 * return false returning control to this function to advance to the next
+	 * block in the bitmap.
+	 */
+	return true;
 }
 
 static bool
 heapam_scan_bitmap_next_tuple(TableScanDesc scan,
-							  TBMIterateResult *tbmres,
 							  TupleTableSlot *slot)
 {
 	HeapScanDesc hscan = (HeapScanDesc) scan;