path: root/src/backend/access

* Fail if recovery target is not reached (Peter Eisentraut, 2020-01-29)

    Before, if a recovery target is configured, but the archive ended before
    the target was reached, recovery would end and the server would promote
    without further notice. That was deemed to be pretty wrong. With this
    change, if the recovery target is not reached, it is a fatal error.

    Based-on-patch-by: Leif Gunnar Erlandsen <leif@lako.no>
    Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
    Discussion: https://www.postgresql.org/message-id/flat/993736dd3f1713ec1f63fc3b653839f5@lako.no

* Fix randAccess setting in ReadRecord() (Heikki Linnakangas, 2020-01-28)

    Commit 38a957316d got this backwards.

    Author: Kyotaro Horiguchi
    Discussion: https://www.postgresql.org/message-id/20200128.194408.2260703306774646445.horikyota.ntt@gmail.com

* Fix compile error on HP C. (Thomas Munro, 2020-01-28)

    Per build farm animal anole, after commit 6f38d4dac3.

* Remove dependency on HeapTuple from predicate locking functions. (Thomas Munro, 2020-01-28)

    The following changes make the predicate locking functions more generic
    and suitable for use by future access methods:

    - PredicateLockTuple() is renamed to PredicateLockTID(). It takes
      ItemPointer and inserting transaction ID instead of HeapTuple.

    - CheckForSerializableConflictIn() takes blocknum instead of buffer.

    - CheckForSerializableConflictOut() no longer takes HeapTuple or buffer.

    Author: Ashwin Agrawal
    Reviewed-by: Andres Freund, Kuntal Ghosh, Thomas Munro
    Discussion: https://postgr.es/m/CALfoeiv0k3hkEb3Oqk%3DziWqtyk2Jys1UOK5hwRBNeANT_yX%2Bng%40mail.gmail.com

* Refactor XLogReadRecord(), adding XLogBeginRead() function. (Heikki Linnakangas, 2020-01-26)

    The signature of XLogReadRecord() required the caller to pass the
    starting WAL position as argument, or InvalidXLogRecPtr to continue
    reading at the end of the previous record. That's slightly awkward for
    the callers, as most of them don't want to randomly jump around in the
    WAL stream, but start reading at one position and then read everything
    from that point onwards.

    Remove the 'RecPtr' argument and add a new function XLogBeginRead() to
    specify the starting position instead. That's more convenient for the
    callers. Also, xlogreader holds state that is reset when you change the
    starting position, so having a separate function for doing that feels
    like a more natural fit.

    This changes the XLogFindNextRecord() function so that it doesn't reset
    the xlogreader's state to what it was before the call anymore. Instead,
    it positions the xlogreader to the found record, like XLogBeginRead().

    Reviewed-by: Kyotaro Horiguchi, Alvaro Herrera
    Discussion: https://www.postgresql.org/message-id/5382a7a3-debe-be31-c860-cb810c08f366%40iki.fi

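    For callers, the change amounts to something like the sketch below (an
    illustrative fragment, not code from the commit; xlogreader, startpoint
    and errormsg are placeholder names):

        /* old style: the starting position was passed on every call */
        record = XLogReadRecord(xlogreader, startpoint, &errormsg);

        /* new style: position the reader once, then just read forward */
        XLogBeginRead(xlogreader, startpoint);
        record = XLogReadRecord(xlogreader, &errormsg);
        while (record != NULL)
        {
            /* ... process the record ... */
            record = XLogReadRecord(xlogreader, &errormsg);
        }
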
* Clarify some comments in vacuumlazy.c (Michael Paquier, 2020-01-23)

    Author: Justin Pryzby
    Discussion: https://postgr.es/m/20200113004542.GA26045@telsasoft.com

* Add GUC ignore_invalid_pages. (Fujii Masao, 2020-01-22)

    Detection of WAL records having references to invalid pages during
    recovery causes PostgreSQL to raise a PANIC-level error, aborting the
    recovery. Setting ignore_invalid_pages to on causes the system to ignore
    those WAL records (but still report a warning), and continue recovery.

    This behavior may cause crashes, data loss, propagate or hide corruption,
    or other serious problems. However, it may allow you to get past the
    PANIC-level error, to finish the recovery, and to cause the server to
    start up.

    Author: Fujii Masao
    Reviewed-by: Michael Paquier
    Discussion: https://www.postgresql.org/message-id/CAHGQGwHCK6f77yeZD4MHOnN+PaTf6XiJfEB+Ce7SksSHjeAWtg@mail.gmail.com

* Fix the computation of max dead tuples during the vacuum. (Amit Kapila, 2020-01-22)

    In commit 40d964ec99, we changed the way memory is allocated for dead
    tuples but forgot to update the place where we compute the maximum
    number of dead tuples. This could lead to invalid memory requests.

    Reported-by: Andres Freund
    Diagnosed-by: Andres Freund
    Author: Masahiko Sawada
    Reviewed-by: Amit Kapila and Dilip Kumar
    Discussion: https://postgr.es/m/20200121060020.e3cr7s7fj5rw4lok@alap3.anarazel.de

* Fix crash in BRIN inclusion op functions, due to missing datum copy. (Heikki Linnakangas, 2020-01-20)

    The BRIN add_value() and union() functions need to make a longer-lived
    copy of the argument, if they want to store it in the BrinValues struct
    also passed as argument. The functions for the "inclusion operator
    classes" used with box, range and inet types didn't take into account
    that the union helper function might return its argument as is, without
    making a copy. Check for that case, and make a copy if necessary. That
    case arises at least with the range_union() function, when one of the
    arguments is an 'empty' range:

        CREATE TABLE brintest (n numrange);
        CREATE INDEX brinidx ON brintest USING brin (n);
        INSERT INTO brintest VALUES ('empty');
        INSERT INTO brintest VALUES (numrange(0, 2^1000::numeric));
        INSERT INTO brintest VALUES ('(-1, 0)');
        SELECT brin_desummarize_range('brinidx', 0);
        SELECT brin_summarize_range('brinidx', 0);

    Backpatch down to 9.5, where BRIN was introduced.

    Discussion: https://www.postgresql.org/message-id/e6e1d6eb-0a67-36aa-e779-bcca59167c14%40iki.fi
    Reviewed-by: Emre Hasegeli, Tom Lane, Alvaro Herrera

* Allow vacuum command to process indexes in parallel. (Amit Kapila, 2020-01-20)

    This feature allows the vacuum to leverage multiple CPUs in order to
    process indexes. This enables us to perform index vacuuming and index
    cleanup with background workers. This adds a PARALLEL option to the
    VACUUM command where the user can specify the number of workers that can
    be used to perform the command, which is limited by the number of
    indexes on a table. Specifying zero as a number of workers will disable
    parallelism. This option can't be used with the FULL option.

    Each index is processed by at most one vacuum process. Therefore
    parallel vacuum can be used when the table has at least two indexes.

    The parallel degree is either specified by the user or determined based
    on the number of indexes that the table has, and further limited by
    max_parallel_maintenance_workers. An index can participate in parallel
    vacuum iff its size is greater than min_parallel_index_scan_size.

    Author: Masahiko Sawada and Amit Kapila
    Reviewed-by: Dilip Kumar, Amit Kapila, Robert Haas, Tomas Vondra,
    Mahendra Singh and Sergei Kornilov
    Tested-by: Mahendra Singh and Prabhat Sahu
    Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
        https://postgr.es/m/CAA4eK1J-VoR9gzS5E75pcD-OH0mEyCdp8RihcwKrcuw7J-Q0+w@mail.gmail.com

* Avoid full scan of GIN indexes when possible (Alexander Korotkov, 2020-01-18)

    The strategy of a GIN index scan is driven by the opclass-specific
    extract_query method, which can report that the needed search mode is
    GIN_SEARCH_MODE_ALL. This mode means that a matching tuple may contain
    none of the extracted entries. A simple example is the '!term' tsquery,
    which doesn't need any term to exist in the matching tsvector.

    In order to handle such a scan key, GIN calculates a virtual entry,
    which contains all TIDs of all entries of the attribute. In effect this
    is a full scan of the index attribute. Typically this is very slow, but
    it allows GIN to handle some queries correctly. However, the current
    algorithm calculates such a virtual entry for each GIN_SEARCH_MODE_ALL
    scan key, even if there are multiple such keys for the same attribute.
    This is clearly not optimal.

    This commit improves the situation by introducing "exclude only" scan
    keys. Such scan keys cannot return a set of matching TIDs on their own;
    they can only filter TIDs produced by normal scan keys. Therefore, each
    attribute must have at least one normal scan key, while the rest may be
    "exclude only" if their search mode is GIN_SEARCH_MODE_ALL.

    The same optimization might be applied to the whole scan, not per
    attribute, but that leads to a problem with eliminating NULL values.
    There is a trade-off between multiple possible ways to do this; we
    probably want to do this later using some cost-based decision algorithm.

    Discussion: https://postgr.es/m/CAOBaU_YGP5-BEt5Cc0%3DzMve92vocPzD%2BXiZgiZs1kjY0cj%3DXBg%40mail.gmail.com
    Author: Nikita Glukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud
    Reviewed-by: Julien Rouhaud, Tomas Vondra, Tom Lane

* Introduce IndexAM fields for parallel vacuum. (Amit Kapila, 2020-01-15)

    Introduce new fields amusemaintenanceworkmem and amparallelvacuumoptions
    in IndexAmRoutine for parallel vacuum.

    The amusemaintenanceworkmem field tells whether a particular IndexAM
    uses maintenance_work_mem or not. This will help in controlling the
    memory used by individual workers, as otherwise each worker can consume
    memory equal to maintenance_work_mem.

    The amparallelvacuumoptions field tells whether a particular IndexAM
    participates in a parallel vacuum and, if so, in which phase
    (bulkdelete, vacuumcleanup) of vacuum.

    Author: Masahiko Sawada and Amit Kapila
    Reviewed-by: Dilip Kumar, Amit Kapila, Tomas Vondra and Robert Haas
    Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
        https://postgr.es/m/CAA4eK1LmcD5aPogzwim5Nn58Ki+74a6Edghx4Wd8hAskvHaq5A@mail.gmail.com

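    For illustration, an index AM handler might now fill in the new fields
    along these lines (a sketch; the VACUUM_OPTION_* flag names are given as
    recalled from vacuum.h and should be verified against the tree):

        IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

        /* this AM's bulkdelete/vacuumcleanup code does not rely on
         * maintenance_work_mem for its own allocations */
        amroutine->amusemaintenanceworkmem = false;
        /* allow parallel workers in both the bulkdelete and cleanup phases */
        amroutine->amparallelvacuumoptions =
            VACUUM_OPTION_PARALLEL_BULKDEL | VACUUM_OPTION_PARALLEL_CLEANUP;
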
* Fix comment in heapam.c (Michael Paquier, 2020-01-13)

    Improvement per suggestion from Tom Lane.

    Author: Daniel Gustafsson
    Discussion: https://postgr.es/m/FED18699-4270-4778-8DA8-10F119A5ECF3@yesql.se

* Delete empty pages in each pass during GIST VACUUM. (Amit Kapila, 2020-01-13)

    Earlier, we used to postpone deleting empty pages till the second stage
    of vacuum to amortize the cost of scanning internal pages. However, that
    can sometimes (say when vacuum is canceled or errors out between the
    first and second stage) delay the recycling of those pages.

    Another thing is that to facilitate deleting empty pages in the second
    stage, we need to share the information about internal and empty pages
    between different stages of vacuum. It will be quite tricky to share
    this information via DSM, which is required for the upcoming parallel
    vacuum patch.

    Also, it will bring the logic to reclaim deleted pages closer to nbtree,
    where we delete empty pages in each pass.

    Overall, the advantages of deleting empty pages in each pass outweigh
    the advantages of postponing the same.

    Author: Dilip Kumar, with changes by Amit Kapila
    Reviewed-by: Sawada Masahiko and Amit Kapila
    Discussion: https://postgr.es/m/CAA4eK1LGr+MN0xHZpJ2dfS8QNQ1a_aROKowZB+MPNep8FVtwAA@mail.gmail.com

* Reimplement nullification of walsender timestamp (Alvaro Herrera, 2020-01-08)

    Make the value null only at pg_stat_activity-output time, as suggested
    by Tom Lane, instead of messing with the internal state. This should
    appease buildfarm members with force_parallel_mode=regress, which are
    running parallel queries on logical replication walsenders.

    The fact that walsenders can run parallel queries should perhaps be
    studied more carefully, but for the moment let's get rid of the red
    blots in buildfarm.

    Backpatch to pg10, like the previous commit.

    Discussion: https://postgr.es/m/30804.1578438763@sss.pgh.pa.us

* pg_stat_activity: show NULL stmt start time for walsenders (Alvaro Herrera, 2020-01-07)

    Returning a non-NULL time is pointless, since a walsender is not a
    process that would be running normal transactions anyway, but the code
    was unintentionally exposing the process start time intermittently,
    which was not only bogus but also confused monitoring systems looking
    for idle transactions. Fix by avoiding all updates in walsenders.

    Backpatch to 11, where walsenders started appearing in pg_stat_activity.

    Reported-by: Tomas Vondra
    Discussion: https://postgr.es/m/20191209234409.exe7osmyalwkt5j4@development

* tableam: New callback relation_fetch_toast_slice. (Robert Haas, 2020-01-07)

    Instead of always calling heap_fetch_toast_slice during detoasting,
    invoke a table AM callback which, when the toast table is a heap table,
    will be heap_fetch_toast_slice.

    This makes it possible for a table AM other than heap to be used as a
    TOAST table. It also completes the series of commits intended to improve
    the interaction of tableam with TOAST that began with commit
    8b94dab06617ef80a0901ab103ebd8754427ef5a; detoast.c is now, hopefully,
    fully AM-independent.

    Patch by me, reviewed by Andres Freund and Peter Eisentraut.

    Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com

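    For a hypothetical table AM, wiring up the new callback might look like
    the sketch below (not from the commit; it assumes heap_fetch_toast_slice
    matches the callback's signature, as the message implies):

        static const TableAmRoutine my_am_methods = {
            .type = T_TableAmRoutine,
            /* ... the other callbacks ... */
            /* reuse the heap implementation for a heap-backed TOAST table */
            .relation_fetch_toast_slice = heap_fetch_toast_slice,
        };
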
* tableam: Allow choice of toast AM. (Robert Haas, 2020-01-07)

    Previously, the toast table had to be implemented by the same AM that
    was used for the main table, which was bad, because the detoasting code
    won't work with anything but heap. This commit doesn't fix the latter
    problem, although there's another patch coming which does, but it does
    let you pick something that works (i.e. heap, right now).

    Patch by me, reviewed by Andres Freund.

    Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com

* Remove redundant incomplete split assertion. (Peter Geoghegan, 2020-01-05)

    The fastpath insert optimization's incomplete split flag Assert() is
    redundant. We'll reach the more general Assert() within
    _bt_findinsertloc() in all cases. (Besides, Assert()'ing that the
    rightmost page doesn't have the flag set never made much sense.)

* Add xl_btree_delete optimization. (Peter Geoghegan, 2020-01-03)

    Commit 558a9165e08 taught _bt_delitems_delete() to produce its own XID
    horizon on the primary. Standbys no longer needed to generate their own
    latestRemovedXid, since they could just use the explicitly logged value
    from the primary instead. The deleted offset numbers array from the
    xl_btree_delete WAL record was no longer used by the REDO routine for
    anything other than deleting the items.

    This enables a minor optimization: We now treat the array as buffer
    state, not generic WAL data, following _bt_delitems_vacuum()'s example.
    This should be a minor win, since it allows us to avoid including the
    deleted items array in cases where XLogInsert() stores the whole buffer
    anyway. The primary goal here is to make the code more maintainable,
    though. Removing inessential differences between the two functions
    highlights the fundamental differences that remain.

    Also change xl_btree_delete to use uint32 for the size of the array of
    item offsets being deleted. This brings xl_btree_delete closer to
    xl_btree_vacuum. Furthermore, it seems like a good idea to use an
    explicit-width integer type (the field was previously an "int").

    Bump XLOG_PAGE_MAGIC because xl_btree_delete changed.

    Discussion: https://postgr.es/m/CAH2-Wzkz4TjmezzfAbaV1zYrh=fr0bCpzuJTvBe5iUQ3aUPsCQ@mail.gmail.com

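    Registering the offsets as buffer data rather than as generic record
    data looks roughly like this (a sketch only; deletable and ndeletable
    are placeholder names for the offset array and its length):

        XLogBeginInsert();
        XLogRegisterData((char *) &xlrec_delete, SizeOfBtreeDelete);
        XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
        /* buffer data is omitted automatically whenever a full-page image
         * of the registered buffer is written instead */
        XLogRegisterBufData(0, (char *) deletable,
                            ndeletable * sizeof(OffsetNumber));
        recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_DELETE);
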
* Clear up btree_xlog_split() alignment comment. (Peter Geoghegan, 2020-01-02)

    Adjust a comment that describes how alignment of the new left page high
    key works in btree_xlog_split(), the nbtree page split REDO routine. The
    wording used before commit 2c03216d831 is much clearer, so go back to
    that.

* Correct _bt_delitems_vacuum() lock comments. (Peter Geoghegan, 2020-01-02)

    The expectation within _bt_delitems_vacuum() is that caller has a
    super-exclusive/cleanup buffer lock (not just a pin and a write lock).

* Revise BTP_HAS_GARBAGE nbtree VACUUM comments. (Peter Geoghegan, 2020-01-01)

    _bt_delitems_vacuum() comments claimed that it isn't worth another scan
    of the page to avoid falsely unsetting the BTP_HAS_GARBAGE page flag
    hint (this happens to be the same wording that was removed from
    _bt_delitems_delete() by my recent commit fe97c61c). The comments made
    little sense, though. The issue can't have much to do with performing a
    second scan of the target leaf page, since an LP_DEAD test could easily
    be performed in the first scan of the page anyway (the scan that takes
    place in btvacuumpage() caller).

    Revise the explanation. It makes much more sense to frame this as an
    issue about recovery conflicts. _bt_delitems_vacuum() cannot easily
    generate an XID cutoff in the same way that _bt_delitems_delete() is
    designed to. Falsely unsetting the page flag is not ideal, and is likely
    to happen more often than was supposed by the original comments. Explain
    why it usually isn't a problem in practice.

    There may be an argument for _bt_delitems_vacuum() not clearing the
    BTP_HAS_GARBAGE bit, removing the question of it being falsely unset by
    VACUUM (there may even be an argument for not using a page level hint at
    all). This can be revisited later.

* Update btree_xlog_delete() comments. (Peter Geoghegan, 2020-01-01)

    Commit fe97c61c updated LP_DEAD item deletion comments, but missed a
    minor discrepancy on the REDO side. Fix it now.

    In passing, don't talk about the btree_xlog_vacuum() behavior within
    btree_xlog_delete(). The reliance on XLOG_HEAP2_CLEANUP_INFO records for
    recovery conflicts is already discussed within btvacuumpage() and
    mentioned again in passing above btree_xlog_vacuum(), which seems
    sufficient.

* Update copyrights for 2020 (Bruce Momjian, 2020-01-01)

    Backpatch-through: update all files in master, backpatch legal files
    through 9.4

* Revert "Rename files and headers related to index AM"Michael Paquier2019-12-27
| | | | | | | | This follows multiple complains from Peter Geoghegan, Andres Freund and Alvaro Herrera that this issue ought to be dug more before actually happening, if it happens. Discussion: https://postgr.es/m/20191226144606.GA5659@alvherre.pgsql
* Refactor code dedicated to index vacuuming in vacuumlazy.c (Michael Paquier, 2019-12-26)

    The part in charge of doing the vacuum on all the indexes of a relation
    was duplicated, with the same handling for progress reporting done.
    While on it, update the progress reporting for heap vacuuming in the
    subroutine doing the actual work, keeping the status update local. This
    way, any future caller of lazy_vacuum_heap() does not have to worry
    about doing any progress reporting update.

    Author: Justin Pryzby, Michael Paquier
    Discussion: https://postgr.es/m/20191120210600.GC30362@telsasoft.com

* Rename files and headers related to index AM (Michael Paquier, 2019-12-25)

    The following renaming is done so that source files related to index
    access methods are more consistent with table access methods (the
    original names used for index AMs were too generic, and could be
    confused as including features related to table AMs):

    - amapi.h -> indexam.h.
    - amapi.c -> indexamapi.c. Here we have an equivalent with
      backend/access/table/tableamapi.c.
    - amvalidate.c -> indexamvalidate.c.
    - amvalidate.h -> indexamvalidate.h.
    - genam.c -> indexgenam.c.
    - genam.h -> indexgenam.h.

    This has been discussed during the development of v12 when table AM was
    worked on, but the renaming never happened.

    Author: Michael Paquier
    Reviewed-by: Fabien Coelho, Julien Rouhaud
    Discussion: https://postgr.es/m/20191223053434.GF34339@paquier.xyz

* Avoid splitting C string literals with \-newline (Alvaro Herrera, 2019-12-24)

    Using \ is unnecessary and ugly, so remove that. While at it, stitch the
    literals back into a single line: we've long discouraged splitting error
    message literals even when they go past the 80 chars line limit, to
    improve greppability.

    Leave contrib/tablefunc alone.

    Discussion: https://postgr.es/m/20191223195156.GA12271@alvherre.pgsql

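    For illustration, the kind of change involved looks like this (a made-up
    message, not one from the commit):

        /* discouraged: a backslash-newline splits the literal, and the next
         * line's leading whitespace becomes part of the string */
        ereport(ERROR,
                (errmsg("could not read block %u in file \
        \"%s\"", blkno, path)));

        /* preferred: keep the literal on one line, even past 80 columns,
         * so that the message stays greppable */
        ereport(ERROR,
                (errmsg("could not read block %u in file \"%s\"", blkno, path)));
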
* Update nbtree LP_DEAD item deletion comments. (Peter Geoghegan, 2019-12-22)

    Comments about the consequences of clearing the BTP_HAS_GARBAGE page
    flag bit that apply only to VACUUM were added to code that deals with
    opportunistic deletion of LP_DEAD items by commit a760893d. The same
    comment block was added to both _bt_delitems_vacuum() and
    _bt_delitems_delete().

    Correct _bt_delitems_delete()'s copy of the comment block.
    _bt_delitems_delete() reliably deletes items that were found by caller
    to have their LP_DEAD bit set. There is no question about whether or not
    unsetting the BTP_HAS_GARBAGE bit can miss some LP_DEAD items that were
    set recently.

    Also tweak a related section of the nbtree README.

* Remove unneeded "pin scan" nbtree VACUUM code.Peter Geoghegan2019-12-19
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The REDO routine for nbtree's xl_btree_vacuum record type hasn't performed a "pin scan" since commit 3e4b7d87 went in, so clearly there isn't any point in VACUUM WAL-logging information that won't actually be used. Finish off the work of commit 3e4b7d87 (and the closely related preceding commit 687f2cd7) by removing the code that generates this unused information. Also remove the REDO routine code disabled by commit 3e4b7d87. Replace the unneeded lastBlockVacuumed field in xl_btree_vacuum with a new "ndeleted" field. The new field isn't actually needed right now, since we could continue to infer the array length from the overall record length. However, an upcoming patch to add deduplication to nbtree needs to add an "items updated" field to xl_btree_vacuum, so we might as well start being explicit about the number of items now. (Besides, it doesn't seem like a good idea to leave the xl_btree_vacuum struct without any fields; the C standard says that that's undefined.) nbtree VACUUM no longer forces writing a WAL record for the last block in the index. Writing out a WAL record with no items for the final block was supposed to force processing of a lastBlockVacuumed field by a pin scan. Bump XLOG_PAGE_MAGIC because xl_btree_vacuum changed. Discussion: https://postgr.es/m/CAH2-WzmY_mT7UnTzFB5LBQDBkKpdV5UxP3B5bLb7uP%3D%3D6UQJRQ%40mail.gmail.com
* revert: Remove meaningless assignments in nbtree code (Bruce Momjian, 2019-12-19)

    Reverts commit 05684c8255.

    Reported-by: Tom Lane
    Discussion: https://postgr.es/m/404.1576770942@sss.pgh.pa.us
    Backpatch-through: master

* Remove meaningless assignments in nbtree code (Bruce Momjian, 2019-12-19)

    Reported-by: Ranier Vilela
    Discussion: https://postgr.es/m/MN2PR18MB2927BB876D12A70FDBE8F35AE3450@MN2PR18MB2927.namprd18.prod.outlook.com
    Backpatch-through: master

* Fix minor problems with non-exclusive backup cleanup. (Robert Haas, 2019-12-19)

    The previous coding imagined that it could call before_shmem_exit() when
    a non-exclusive backup began and then remove the previously-added
    handler by calling cancel_before_shmem_exit() when that backup ended.
    However, this only works provided that nothing else in the system has
    registered a before_shmem_exit() hook in the interim, because
    cancel_before_shmem_exit() is documented to remove a callback only if it
    is the latest callback registered. It also only works if nothing can
    ERROR out between the time that sessionBackupState is reset and the time
    that cancel_before_shmem_exit() is called, which doesn't seem to be
    strictly true.

    To fix, leave the handler installed for the lifetime of the session,
    arrange to install it just once, and teach it to quietly do nothing if
    there isn't a non-exclusive backup in process.

    This is a bug, but for now I'm not going to back-patch, because the
    consequences are minor. It's possible to cause a spurious warning to be
    generated, but that doesn't really matter. It's also possible to trigger
    an assertion failure, but production builds shouldn't have assertions
    enabled.

    Patch by me, reviewed by Kyotaro Horiguchi, Michael Paquier (who
    preferred a different approach, but got outvoted), Fujii Masao, and Tom
    Lane, and with comments by various others.

    Discussion: http://postgr.es/m/CA+TgmobMjnyBfNhGTKQEDbqXYE3_rXWpc4CM63fhyerNCes3mA@mail.gmail.com

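    The register-once pattern described above is roughly the following (a
    hedged sketch; the flag name is invented and the callback argument is
    glossed over):

        static bool backup_cleanup_registered = false;

        if (!backup_cleanup_registered)
        {
            /* stays installed for the rest of the session; the callback
             * quietly does nothing when no non-exclusive backup is running */
            before_shmem_exit(do_pg_abort_backup, (Datum) 0);
            backup_cleanup_registered = true;
        }
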
* Move heap-specific detoasting logic into a separate function. (Robert Haas, 2019-12-18)

    The new function, heap_fetch_toast_slice, is shared between
    toast_fetch_datum_slice and toast_fetch_datum, and does all the work of
    scanning the TOAST table, fetching chunks, and storing them into the
    space allocated for the result varlena.

    As an incidental side effect, this allows toast_fetch_datum_slice to
    perform the scan with only a single scankey if all chunks are being
    fetched, which might have some tiny performance benefit.

    Discussion: http://postgr.es/m/CA+TgmobBzxwFojJ0zV0Own3dr09y43hp+OzU2VW+nos4PMXWEg@mail.gmail.com

* Fix compiler warning in non-assert builds (Michael Paquier, 2019-12-18)

    Oversight in commit e1551f9.

    Reported-by: Erik Rijkers
    Discussion: https://postgr.es/m/b7ad911d3eaa29af9fcdb9ccb26c363c@xs4all.nl

* Refactor attribute mappings used in logical tuple conversion (Michael Paquier, 2019-12-18)

    Tuple conversion support in tupconvert.c is able to convert rowtypes
    between two relations, inner and outer, which are logically equivalent
    but have a different ordering or even dropped columns (used mainly for
    inheritance trees and partitions). This makes use of attribute mappings,
    which are simple arrays made of AttrNumber elements with a length
    matching the number of attributes of the outer relation.

    The length of the attribute mapping has been treated as completely
    independent of the mapping itself until now, making it easy to pass down
    an incorrect mapping length.

    This commit refactors the code related to attribute mappings and moves
    it into an independent facility called attmap.c, extracted from
    tupconvert.c. This merges the attribute mapping with its length,
    avoiding having to guess which length a mapping should have, as this is
    computed once, when the map is built.

    This will avoid mistakes like what has been fixed in dc816e58, which
    used an incorrect mapping length by matching it with the number of
    attributes of an inner relation (a child partition) instead of an outer
    relation (a partitioned table).

    Author: Michael Paquier
    Reviewed-by: Amit Langote
    Discussion: https://postgr.es/m/20191121042556.GD153437@paquier.xyz

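    The resulting API bundles the array and its length together, roughly as
    below (a sketch from memory; see attmap.h for the exact declarations):

        #include "access/attmap.h"

        /* map the columns of indesc onto outdesc by name; the length now
         * travels with the map instead of being passed around separately */
        AttrMap    *map = build_attrmap_by_name(indesc, outdesc);

        for (int i = 0; i < map->maplen; i++)
            elog(DEBUG1, "output attnum %d comes from input attnum %d",
                 i + 1, map->attnums[i]);

        free_attrmap(map);
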
* Remove shadow variables linked to RedoRecPtr in xlog.c (Michael Paquier, 2019-12-18)

    This changes the routines in charge of recycling WAL segments past the
    last redo LSN so that they no longer use "RedoRecPtr" as a local
    variable; the same name is also available in the context of the session
    as a static declaration. The local variable is replaced with
    "lastredoptr". This confusion has been introduced by d9fadbf, so
    backpatch down to v11 like the other commit.

    Thanks to Tom Lane, Robert Haas, Alvaro Herrera, Mark Dilger and Kyotaro
    Horiguchi for the input provided.

    Author: Ranier Vilela
    Discussion: https://postgr.es/m/MN2PR18MB2927F7B5F690065E1194B258E35D0@MN2PR18MB2927.namprd18.prod.outlook.com
    Backpatch-through: 11

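    The hazard being removed is plain C variable shadowing, along these
    lines (a schematic illustration, not the actual xlog.c code):

        static XLogRecPtr RedoRecPtr;      /* file-scope cached copy */

        static void
        recycle_example(void)
        {
            /* shadows the file-scope variable above, so it is easy to read
             * or update the wrong one without any compiler complaint */
            XLogRecPtr  RedoRecPtr = GetRedoRecPtr();

            /* ... use RedoRecPtr ... */
        }
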
* Fix bad formula in previous commit. (Robert Haas, 2019-12-17)

    Commit d5406dea25b600408e7acf17d5a06e82d3ce6d0d used a slightly novel,
    and wrong, approach to compute the length of the last toast chunk. It
    worked fine unless the last chunk happened to have the largest possible
    size.

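    The arithmetic in question is the classic modulo trap (illustrative
    variable names, not the ones used in detoast.c):

        /* wrong: evaluates to 0 when the last chunk is exactly
         * TOAST_MAX_CHUNK_SIZE bytes long */
        last_size = attrsize % TOAST_MAX_CHUNK_SIZE;

        /* right: a full-sized final chunk maps back to TOAST_MAX_CHUNK_SIZE */
        last_size = ((attrsize - 1) % TOAST_MAX_CHUNK_SIZE) + 1;
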
* Code cleanup for toast_fetch_datum and toast_fetch_datum_slice. (Robert Haas, 2019-12-17)

    Rework some of the checks for bad TOAST chunks to be a bit simpler and
    easier to understand. These checks verify that (1) we get all and only
    the chunk numbers we expect to see and (2) each chunk has the expected
    size. However, the existing code was a bit hard to understand, at least
    for me; try to make it clearer.

    As part of that, have toast_fetch_datum_slice check the relationship
    between endchunk and totalchunks only with an Assert() rather than
    checking every chunk number against both values. There's no need to
    check that relationship in production builds because it's not a function
    of whether on-disk corruption is present; it's just a question of
    whether the code does the right math.

    Also, have toast_fetch_datum_slice() use ereport(ERROR) rather than
    elog(ERROR). Commit fd6ec93bf890314ac694dc8a7f3c45702ecc1bbd made the
    two functions inconsistent with each other.

    Rename assorted variables for better clarity and consistency, and move
    assorted variables from function scope to the function's main loop.
    Entirely remove a few variables that are used only once.

    Patch by me, reviewed by Peter Eisentraut.

    Discussion: http://postgr.es/m/CA+TgmobBzxwFojJ0zV0Own3dr09y43hp+OzU2VW+nos4PMXWEg@mail.gmail.com

* Change overly strict Assert in TransactionGroupUpdateXidStatus. (Amit Kapila, 2019-12-17)

    This Assert thought that an overflowed transaction can never get
    registered for the group update. But that is not true, because even when
    the number of children for a transaction got reduced, the overflow flag
    is not changed. And, for group update, we only care about the current
    number of children for a transaction that is being committed.

    Based on comments by Andres Freund, remove a redundant Assert in
    TransactionIdSetPageStatus as we already had a static Assert for the
    same condition a few lines earlier.

    Reported-by: Vignesh C
    Author: Dilip Kumar
    Reviewed-by: Amit Kapila
    Backpatch-through: 11
    Discussion: https://postgr.es/m/CAFiTN-s5=uJw-Z6JC9gcqtBSjXsrHnU63PXBrA=pnBjqnkm5UA@mail.gmail.com

* Rename nbtree tuple macros. (Peter Geoghegan, 2019-12-16)

    Rename two function-style macros, removing the word "inner". This makes
    things more consistent.

* Update nbtree README's "Scans during Recovery". (Peter Geoghegan, 2019-12-16)

    get_actual_variable_range() hasn't used a dirty snapshot since commit
    3ca930fc3, which invented a new snapshot type specifically to meet
    selfuncs.c's requirements (HeapTupleSatisfiesNonVacuumable() type
    snapshots were added).

    Discussion: https://postgr.es/m/CAH2-Wzn2pSqEOcBDAA40CnO82oEy-EOpE2bNh_XL_cfFoA86jw@mail.gmail.com

* Demote variable from global to local (Alvaro Herrera, 2019-12-16)

    recoveryDelayUntilTime was introduced by commit 36da3cfb457b as a global
    because its method of operation was devilishly intricate. Commit
    c945af80cfda removed all that complexity and could have turned it into a
    local variable, but didn't. Do so now.

    Discussion: https://postgr.es/m/20191213200751.GA10731@alvherre.pgsql
    Reviewed-by: Michaël Paquier, Daniel Gustafsson

* Fix yet another crash in page split during GiST index creation. (Heikki Linnakangas, 2019-12-16)

    Commit a7ee7c8513 fixed a bug in GiST page split during index creation,
    where we failed to re-find the position of a downlink after the page
    containing it was split. However, that fix was incomplete; the other
    call to gistinserttuples() in the same function needs to also clear
    'downlinkoffnum'.

    Fixes bug #16134 reported by Alexander Lakhin, for real this time. The
    previous fix was enough to fix the crash with the reproducer script for
    bug #16162, but the original script for #16134 was still crashing.

    Backpatch to v12, like the previous incomplete fix.

    Discussion: https://www.postgresql.org/message-id/d869f537-abe4-d2ea-0510-38cd053f5152%40gmail.com

* Remove duplicated progress reporting during heap scan of VACUUM (Michael Paquier, 2019-12-15)

    This was introduced by c16dc1a, when progress reporting for VACUUM was
    added. As this issue just causes some extra work and is harmless, no
    backpatch is done.

    Author: Justin Pryzby
    Discussion: https://postgr.es/m/20191213030831.GT2082@telsasoft.com

* Fix crash when a page was split during GiST index creation. (Heikki Linnakangas, 2019-12-13)

    The bug was similar to the one that was fixed in commit 22251686f0. When
    we split page X and insert the downlink for the new page, the parent
    page might also need to be split. When that happens, the downlink offset
    number we remembered for X is no longer valid. We correctly called
    gistFindCorrectParent() to re-find it, but gistFindCorrectParent()
    doesn't do anything if the LSN of the page hasn't changed, and we
    stopped updating LSNs during index build in commit 9155580fd5. The buggy
    codepath was taken if the page was split into three or more pages, and
    inserting the downlink caused the parent page to split. To fix,
    explicitly mark the downlink offset number as invalid, to force
    gistFindCorrectParent() to re-find it.

    Fixes bug #16134 reported by Alexander Lakhin, reported again as #16162
    by Andreas Kunert. Thanks to Jeff Janes, Tom Lane and Tomas Vondra for
    debugging. Backpatch to v12, where we stopped WAL-logging during index
    build.

    Discussion: https://www.postgresql.org/message-id/16134-0423f729671dec64%40postgresql.org
    Discussion: https://www.postgresql.org/message-id/16162-45d21b7b6c1a3105%40postgresql.org

* Fix thinkos from commit 9989d37 (Michael Paquier, 2019-12-03)

    Error messages referring to incorrect WAL segment names could have been
    generated for an fsync() failure or when creating a new segment at the
    end of recovery.

* Remove XLogFileNameP() from the tree (Michael Paquier, 2019-12-03)

    XLogFileNameP() is a wrapper routine able to build a palloc'd string for
    a WAL segment name, which is used for error string generation. There
    were several code paths where it gets called in a critical section,
    where memory allocation is not allowed. This results in triggering an
    assertion failure instead of generating the wanted error message.

    Another, more annoying, problem is that if the allocation to generate
    the WAL segment name fails on OOM, then the failure would be escalated
    to a PANIC.

    This removes the routine and all its callers are replaced with logic
    using a fixed-size buffer. This way, all the existing mistakes are fixed
    and future ones are prevented.

    Author: Masahiko Sawada
    Reviewed-by: Michael Paquier, Álvaro Herrera
    Discussion: https://postgr.es/m/CA+fd4k5gC9H4uoWMLg9K_QfNrnkkdEw+-AFveob9YX7z8JnKTA@mail.gmail.com

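    The replacement pattern is roughly as follows (a sketch; the error
    message itself is made up):

        char        xlogfname[MAXFNAMELEN];

        /* format the segment name into a stack buffer; no palloc involved,
         * so this is safe inside a critical section */
        XLogFileName(xlogfname, ThisTimeLineID, segno, wal_segment_size);
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not write to log file %s: %m", xlogfname)));
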
* Remove useless "return;" linesAlvaro Herrera2019-11-28
| | | | Discussion: https://postgr.es/m/20191128144653.GA27883@alvherre.pgsql