path: root/src/backend/executor
* Fix buffer refcount leak with FDW bulk inserts (Michael Paquier, 2023-04-25)
  The leak would show up when using batch inserts with foreign tables included in a partition tree, as the slots used in the batch were not reset once processed.  In order to fix this problem, some ExecClearTuple() calls are added to clean up the slots used once a batch is filled and processed, mapping with the number of slots currently in use as tracked by the counter ri_NumSlots.
  This buffer refcount leak has been introduced in b676ac4 with the addition of the executor facility to improve bulk inserts for FDWs, so backpatch down to 14.
  Alexander has provided the patch (slightly modified by me).  The test for postgres_fdw comes from me, based on the test case that the author has sent in the report.
  Author: Alexander Pyhalov
  Discussion: https://postgr.es/m/b035780a740efd38dc30790c76927255@postgrespro.ru
  Backpatch-through: 14
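  As an illustration only (not the committed patch), the shape of the fix is to clear the batched slots once they have been flushed, using the v14+ ResultRelInfo batching fields:

      /*
       * Sketch: after a filled batch has been flushed to the FDW, clear the
       * slots kept for it so that their buffer pins are released.
       */
      for (int i = 0; i < resultRelInfo->ri_NumSlots; i++)
      {
          ExecClearTuple(resultRelInfo->ri_Slots[i]);
          ExecClearTuple(resultRelInfo->ri_PlanSlots[i]);
      }
      resultRelInfo->ri_NumSlots = 0;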
* Fix assignment to array of domain over composite, redux. (Tom Lane, 2023-04-15)
  Commit 3e310d837 taught isAssignmentIndirectionExpr() to look through CoerceToDomain nodes.  That's not sufficient, because since commit 04fe805a1 it's been possible for the planner to simplify CoerceToDomain to RelabelType when the domain has no constraints to enforce.  So we need to look through RelabelType too.
  Per bug #17897 from Alexander Lakhin.  Although 3e310d837 was back-patched to v11, it seems sufficient to apply this change to v12 and later, since 04fe805a1 came in in v12.
  Dmitry Dolgov
  Discussion: https://postgr.es/m/17897-4216c546c3874044@postgresql.org
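  A minimal sketch of the idea (a simplified stand-in, not the real executor routine), peeling no-op coercion nodes before testing for assignment indirection:

      static bool
      looks_like_assignment_indirection(Expr *expr)
      {
          if (expr == NULL)
              return false;
          if (IsA(expr, CoerceToDomain))  /* case handled by 3e310d837 */
              return looks_like_assignment_indirection(((CoerceToDomain *) expr)->arg);
          if (IsA(expr, RelabelType))     /* the case added here */
              return looks_like_assignment_indirection(((RelabelType *) expr)->arg);
          return IsA(expr, FieldStore) || IsA(expr, SubscriptingRef);
      }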
* Fix oversights in array manipulation. (Tom Lane, 2023-03-26)
  The nested-arrays code path in ExecEvalArrayExpr() used palloc to allocate the result array, whereas every other array-creating function has used palloc0 since 18c0b4ecc.  This mostly works, but unused bits past the end of the nulls bitmap may end up undefined.  That causes valgrind complaints with -DWRITE_READ_PARSE_PLAN_TREES, and could cause planner misbehavior as cited in 18c0b4ecc.  There seems no very good reason why we should strive to avoid palloc0 in just this one case, so fix it the easy way with s/palloc/palloc0/.
  While looking at that I noted that we also failed to check for overflow of "nbytes" and "nitems" while summing the sizes of the sub-arrays, potentially allowing a crash due to undersized output allocation.  For "nbytes", follow the policy used by other array-munging code of checking for overflow after each addition.  (As elsewhere, the last addition of the array's overhead space doesn't need an extra check, since palloc itself will catch a value between 1Gb and 2Gb.)  For "nitems", there's no very good reason to sum the inputs at all, since we can perfectly well use ArrayGetNItems' result instead of ignoring it.
  Per discussion of this bug, also remove redundant zeroing of the nulls bitmap in array_set_element and array_set_slice.
  Patch by Alexander Lakhin and myself, per bug #17858 from Alexander Lakhin; thanks also to Richard Guo.  These bugs are a dozen years old, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/17858-8fd287fd3663d051@postgresql.org
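  The "check after each addition" policy looks roughly like this sketch (variable names are illustrative, not the exact execExprInterp.c code):

      nbytes = 0;
      for (int i = 0; i < nsubarrays; i++)
      {
          nbytes += subbytes[i];
          if (!AllocSizeIsValid(nbytes))
              ereport(ERROR,
                      (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                       errmsg("array size exceeds the maximum allowed (%d)",
                              (int) MaxAllocSize)));
      }
      /* palloc0 also zeroes the trailing bits of the nulls bitmap */
      result = (ArrayType *) palloc0(nbytes + ARR_OVERHEAD_WITHNULLS(ndim, nitems));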
* Ignore generated columns during apply of update/delete. (Amit Kapila, 2023-03-23)
  We fail to apply updates and deletes when REPLICA IDENTITY FULL is used for a table having generated columns, because we did not ignore generated columns while comparing tuples from the publisher and subscriber during apply of updates and deletes.
  Author: Onder Kalaci
  Reviewed-by: Shi yu, Amit Kapila
  Backpatch-through: 12
  Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
* Ignore dropped columns during apply of update/delete. (Amit Kapila, 2023-03-21)
  We fail to apply updates and deletes when REPLICA IDENTITY FULL is used for a table having dropped columns, because we did not ignore dropped columns while comparing tuples from the publisher and subscriber during apply of updates and deletes.
  Author: Onder Kalaci, Shi yu
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
* Fix race in parallel hash join batch cleanup, take II. (Thomas Munro, 2023-03-21)
  With unlucky timing and parallel_leader_participation=off (not the default), PHJ could attempt to access per-batch shared state just as it was being freed.  There was code intended to prevent that by checking for a cleared pointer, but it was racy.  Fix, by introducing an extra barrier phase.  The new phase PHJ_BUILD_RUNNING means that it's safe to access the per-batch state to find a batch to help with, and PHJ_BUILD_DONE means that it is too late.  The last to detach will free the array of per-batch state as before, but now it will also atomically advance the phase, so that late attachers can avoid the hazard.  This mirrors the way per-batch hash tables are freed (see phases PHJ_BATCH_PROBING and PHJ_BATCH_DONE).
  An earlier attempt to fix this (commit 3b8981b6, later reverted) missed one special case.  When the inner side is empty (the "empty inner optimization"), the build barrier would only make it to PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from the hashtable.  In that case, fast-forward the build barrier to PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold and we can still negotiate who is cleaning up.
  Revealed by build farm failures, where BarrierAttach() failed a sanity check assertion, because the memory had been clobbered by dsa_free().  In non-assert builds, the result could be a segmentation fault.
  Back-patch to all supported releases.
  Author: Thomas Munro <thomas.munro@gmail.com>
  Author: Melanie Plageman <melanieplageman@gmail.com>
  Reported-by: Michael Paquier <michael@paquier.xyz>
  Reported-by: David Geier <geidav.pg@gmail.com>
  Tested-by: David Geier <geidav.pg@gmail.com>
  Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
* Fix memory leak in Memoize cache key evaluation (David Rowley, 2023-03-20)
  When probing the Memoize cache to check if the current cache key values exist in the cache, we perform an evaluation of the expressions making up the cache key before probing the hash table for those values.  This operation could leak memory as it is possible that the cache key is an expression which requires allocation of memory, as was the case in bug 17844.
  Here we fix this by correctly switching to the per tuple context before evaluating the cache expressions so that the memory is freed next time the per tuple context is reset.
  Bug: 17844
  Reported-by: Alexey Ermakov
  Discussion: https://postgr.es/m/17844-d2f6f9e75a622bed@postgresql.org
  Backpatch-through: 14, where Memoize was introduced
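  The fix follows the usual executor pattern of evaluating per-row expressions in the per-tuple memory context; a minimal sketch, assuming an ExprContext "econtext" and a cache-key ExprState "keyexpr":

      MemoryContext oldcontext;
      Datum         key;
      bool          isnull;

      oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
      key = ExecEvalExpr(keyexpr, econtext, &isnull);
      MemoryContextSwitchTo(oldcontext);
      /* anything keyexpr allocated is freed at the next per-tuple reset */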
* Fix some more cases of missed GENERATED-column updates. (Tom Lane, 2023-03-06)
  If UPDATE is forced to retry after an EvalPlanQual check, it neglected to repeat GENERATED-column computations, even though those might well have changed since we're dealing with a different tuple than before.  Fixing this is mostly a matter of looping back a bit further when we retry.  In v15 and HEAD that's most easily done by altering the API of ExecUpdateAct so that it includes computing GENERATED expressions.
  Also, if an UPDATE in a partitioned table turns into a cross-partition INSERT operation, we failed to recompute GENERATED columns.  That's a bug since 8bf6ec3ba allowed partitions to have different generation expressions; although it seems to have no ill effects before that.  Fixing this is messier because we can now have situations where the same query needs both the UPDATE-aligned set of GENERATED columns and the INSERT-aligned set, and it's unclear which set will be generated first (else we could hack things by forcing the INSERT-aligned set to be generated, which is indeed how fe9e658f4 made it work for MERGE).  The best fix seems to be to build and store separate sets of expressions for the INSERT and UPDATE cases.  That would create ABI issues in the back branches, but so far it seems we can leave this alone in the back branches.
  Per bug #17823 from Hisahiro Kauchi.  The first part of this affects all branches back to v12 where GENERATED columns were added.
  Discussion: https://postgr.es/m/17823-b64909cf7d63de84@postgresql.org
* Fix MULTIEXPR_SUBLINK with partitioned target tables, yet again. (Tom Lane, 2023-02-25)
  We already tried to fix this in commits 3f7323cbb et al (and follow-on fixes), but now it emerges that there are still unfixed cases; moreover, these cases affect all branches not only pre-v14.  I thought we had eliminated all cases of making multiple clones of an UPDATE's target list when we nuked inheritance_planner.  But it turns out we still do that in some partitioned-UPDATE cases, notably including INSERT ... ON CONFLICT UPDATE, because ExecInitPartitionInfo thinks it's okay to clone and modify the parent's targetlist.
  This fix is based on a suggestion from Andres Freund: let's stop abusing the ParamExecData.execPlan mechanism, which was only ever meant to handle initplans, and instead solve the execution timing problem by having the expression compiler move MULTIEXPR_SUBLINK steps to the front of their expression step lists.  This is feasible because (a) all branches still in support compile the entire targetlist of an UPDATE into a single ExprState, and (b) we know that all MULTIEXPR_SUBLINKs do need to be evaluated --- none could be buried inside a CASE, for example.  There is a minor semantics change concerning the order of execution of the MULTIEXPR's subquery versus other parts of the parent targetlist, but that seems like something we can get away with.
  By doing that, we no longer need to worry about whether different clones of a MULTIEXPR_SUBLINK share output Params; their usage of that data structure won't overlap.
  Per bug #17800 from Alexander Lakhin.  Back-patch to all supported branches.  In v13 and earlier, we can revert 3f7323cbb and follow-on fixes; however, I chose to keep the SubPlan.subLinkId field added in ccbb54c72.  We don't need that anymore in the core code, but it's cheap enough to fill, and removing a plan node field in a minor release seems like it'd be asking for trouble.
  Andres Freund and Tom Lane
  Discussion: https://postgr.es/m/17800-ff90866b3906c964@postgresql.org
* Add missing support for the latest SPI status codes. (Dean Rasheed, 2023-02-22)
  SPI_result_code_string() was missing support for SPI_OK_TD_REGISTER, and in v15 and later, it was missing support for SPI_OK_MERGE, as was pltcl_process_SPI_result().  The last of those would trigger an error if a MERGE was executed from PL/Tcl.  The others seem fairly innocuous, but worth fixing.
  Back-patch to all supported branches.  Before v15, this is just adding SPI_OK_TD_REGISTER to SPI_result_code_string(), which is unlikely to be seen by anyone, but seems worth doing for completeness.
  Reviewed by Tom Lane.
  Discussion: https://postgr.es/m/CAEZATCUg8V%2BK%2BGcafOPqymxk84Y_prXgfe64PDoopjLFH6Z0Aw%40mail.gmail.com
  Discussion: https://postgr.es/m/CAEZATCUMe%2B_KedPMM9AxKqm%3DSZogSxjUcrMe%2BsakusZh3BFcQw%40mail.gmail.com
* Disable WindowAgg inverse transitions when subplans are present (David Rowley, 2023-02-13)
  When an aggregate function is used as a WindowFunc and a tuple transitions out of the window frame, we ordinarily try to make use of the aggregate function's inverse transition function to "unaggregate" the exiting tuple.  This optimization is disabled for various cases, including when the aggregate contains a volatile function.  In such a case we'd be unable to ensure that the transition value was calculated to the same value during transitions and inverse transitions.
  Unfortunately, we did this check by calling contain_volatile_functions() which does not recursively search SubPlans for volatile functions.  If the aggregate function's arguments or its FILTER clause contained a subplan with volatile functions then we'd fail to notice this.
  Here we fix this by just disabling the optimization when the WindowFunc contains any subplans.  Volatile functions are not the only reason that a subplan may have nonrepeatable results.
  Bug: #17777
  Reported-by: Anban Company
  Discussion: https://postgr.es/m/17777-860b739b6efde977%40postgresql.org
  Reviewed-by: Tom Lane
  Backpatch-through: 11
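  Conceptually the eligibility test gains one more condition; a sketch with illustrative variable names (contain_subplans() is the existing clause walker that does notice SubPlan nodes in the expression tree):

      bool use_inverse_transitions =
          OidIsValid(invtransfn_oid) &&
          !contain_volatile_functions((Node *) wfunc) &&
          !contain_subplans((Node *) wfunc);   /* the new, conservative check */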
* Make new GENERATED-expressions code more bulletproof. (Tom Lane, 2023-01-15)
  In commit 8bf6ec3ba I assumed that no code path could reach ExecGetExtraUpdatedCols without having gone through ExecInitStoredGenerated.  That turns out not to be the case in logical replication: if there's an ON UPDATE trigger on the target table, trigger.c will call this code before anybody has set up its generated columns.  Having seen that, I don't have a lot of faith in there not being other such paths.
  ExecGetExtraUpdatedCols can call ExecInitStoredGenerated for itself, as long as we are willing to assume that it is only called in CMD_UPDATE operations, which on the whole seems like a safer leap of faith.
  Per report from Vitaly Davydov.
  Discussion: https://postgr.es/m/d259d69652b8c2ff50e14cda3c236c7f@postgrespro.ru
* Fix calculation of which GENERATED columns need to be updated. (Tom Lane, 2023-01-05)
  We were identifying the updatable generated columns of inheritance children by transposing the calculation made for their parent.  However, there's nothing that says a traditional-inheritance child can't have generated columns that aren't there in its parent, or that have different dependencies than are in the parent's expression.  (At present it seems that we don't enforce that for partitioning either, which is likely wrong to some degree or other; but the case clearly needs to be handled with traditional inheritance.)
  Hence, drop the very-klugy-anyway "extraUpdatedCols" RTE field in favor of identifying which generated columns depend on updated columns during executor startup.  In HEAD we can remove extraUpdatedCols altogether; in back branches, it's still there but always empty.  Another difference between the HEAD and back-branch versions of this patch is that in HEAD we can add the new bitmap field to ResultRelInfo, but that would cause an ABI break in back branches.  Like 4b3e37993, add a List field at the end of struct EState instead.
  Back-patch to v13.  The bogus calculation is also being made in v12, but it doesn't have the same visible effect because we don't use it to decide which generated columns to recalculate; as a consequence of which the patch doesn't apply easily.  I think that there might still be a demonstrable bug associated with trigger firing conditions, but that's such a weird corner-case usage that I'm content to leave it unfixed in v12.
  Amit Langote and Tom Lane
  Discussion: https://postgr.es/m/CA+HiwqFshLKNvQUd1DgwJ-7tsTp=dwv7KZqXC4j2wYBV1aCDUA@mail.gmail.com
  Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
* Avoid reference to nonexistent array element in ExecInitAgg(). (Tom Lane, 2023-01-02)
  When considering an empty grouping set, we fetched phasedata->eqfunctions[-1].  Because the eqfunctions array is palloc'd, that would always be an aset pointer in released versions, and thus the code accidentally failed to malfunction (since it would do nothing unless it found a null pointer).  Nonetheless this seems like trouble waiting to happen, so add a check for length == 0.
  It's depressing that our valgrind testing did not catch this.  Maybe we should reconsider the choice to not mark that word NOACCESS?
  Richard Guo
  Discussion: https://postgr.es/m/CAMbWs4-vZuuPOZsKOYnSAaPYGKhmacxhki+vpOKk0O7rymccXQ@mail.gmail.com
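  The guard itself is trivial; a sketch with hypothetical local names:

      /* An empty grouping set has no columns, so there is nothing to compare. */
      if (numGroupCols > 0)
          eqfunc = phasedata->eqfunctions[numGroupCols - 1];
      else
          eqfunc = NULL;      /* instead of reading eqfunctions[-1] */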
* Remove new structure member from ResultRelInfo. (Etsuro Fujita, 2022-12-08)
  In commit ffbb7e65a, I added a ModifyTableState member to ResultRelInfo to save the owning ModifyTableState for use by nodeModifyTable.c when performing batch inserts, but as pointed out by Tom Lane, that changed the array stride of es_result_relations, and that would break any previously-compiled extension code that accesses that array.  Fix by removing that member from ResultRelInfo and instead adding a List member at the end of EState to save such ModifyTableStates.
  Per report from Tom Lane.  Back-patch to v14, like the previous commit; I chose to apply the patch to HEAD as well, to make back-patching easy.
  Discussion: http://postgr.es/m/4065383.1669395453%40sss.pgh.pa.us
* Prevent clobbering of utility statements in SQL function caches. (Tom Lane, 2022-11-29)
  This is an oversight in commit 7c337b6b5: I apparently didn't think about the possibility of a SQL function being executed multiple times within a query.  In that case, functions.c's primitive caching mechanism allows the same utility parse tree to be presented for execution more than once.  We have to tell ProcessUtility to make a working copy of the parse tree, or bad things happen.
  Normally I'd add a regression test, but I think the reported crasher is dependent on some rather random implementation choices that are nowhere near functions.c, so its usefulness as a long-lived test feels questionable.  In any case, this fix is clearly correct given the design choices of 7c337b6b5.
  Per bug #17702 from Xin Wen.  Thanks to Daniel Gustafsson for analysis.  Back-patch to v14 where the faulty commit came in (before that, the responsibility for copying scribble-able utility parse trees lay elsewhere).
  Discussion: https://postgr.es/m/17702-ad24fdcdd1e9047a@postgresql.org
* Fix handling of pending inserts in nodeModifyTable.c. (Etsuro Fujita, 2022-11-25)
  Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added to nodeModifyTable.c code to flush pending inserts to the foreign-table result relation(s) before completing processing of the ModifyTable node, but the code failed to take into account the case where the INSERT query has modifying CTEs, leading to incorrect results.  Also, that commit failed to flush pending inserts before firing BEFORE ROW triggers so that rows are visible to such triggers.
  In that commit we scanned through EState's es_tuple_routing_result_relations or es_opened_result_relations list to find the foreign-table result relations to which pending inserts are flushed, but that would be inefficient in some cases.  So to fix, 1) add a List member to EState to record the insert-pending result relations, and 2) modify nodeModifyTable.c so that it adds the foreign-table result relation to the list in ExecInsert() if appropriate, and flushes pending inserts properly using the list where needed.
  While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(), which was added by that commit.
  Back-patch to v14 where that commit appeared.
  Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
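  A sketch of steps 1) and 2), assuming the new EState list field is named es_insert_pending_result_relations (the field name here is an assumption, not confirmed by this listing):

      /* In ExecInsert(), when a row is queued for a batch rather than sent: */
      if (!list_member_ptr(estate->es_insert_pending_result_relations,
                           resultRelInfo))
          estate->es_insert_pending_result_relations =
              lappend(estate->es_insert_pending_result_relations, resultRelInfo);

      /* Later, flush only the relations actually recorded in that list. */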
* Fix copy-and-pasteo in comment. (Etsuro Fujita, 2022-11-02)
* Update comment in ExecInsert() regarding batch insertion. (Etsuro Fujita, 2022-09-29)
  Remove the stale text that is a leftover from an earlier version of the patch to add support for batch insertion, and adjust the wording in the remaining text.
  Back-patch to v14 where batch insertion came in.
  Review and wording adjustment by Tom Lane.
  Discussion: https://postgr.es/m/CAPmGK14goatHPHQv2Aeu_UTKqZ%2BBO%2BP%2Bzd3HKv5D%2BdyyfWKDSw%40mail.gmail.com
* Future-proof the recursion inside ExecShutdownNode(). (Tom Lane, 2022-09-19)
  The API contract for planstate_tree_walker() callbacks is that they take a PlanState pointer and a context pointer.  Somebody figured they could save a couple lines of code by ignoring that, and passing ExecShutdownNode itself as the walker even though it has but one argument.  Somewhat remarkably, we've gotten away with that so far.  However, it seems clear that the upcoming C2x standard means to forbid such cases, and compilers that actively break such code likely won't be far behind.  So spend the extra few lines of code to do it honestly with a separate walker function.
  In HEAD, we might as well go further and remove ExecShutdownNode's useless return value.  I left that as-is in back branches though, to forestall complaints about ABI breakage.
  Back-patch, with the thought that this might become of practical importance before our stable branches are all out of service.  It doesn't seem to be fixing any live bug on any currently known platform, however.
  Discussion: https://postgr.es/m/208054.1663534665@sss.pgh.pa.us
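  The "honest" shape is a two-argument function matching the walker contract; a minimal sketch with illustrative names:

      static bool
      shutdown_subnodes_walker(PlanState *node, void *context)
      {
          if (node == NULL)
              return false;
          /* per-node-type shutdown work would go here */
          return planstate_tree_walker(node, shutdown_subnodes_walker, context);
      }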
* Fix replica identity check for a partitioned table. (Amit Kapila, 2022-08-16)
  The current publisher code checks if UPDATE or DELETE can be executed with the replica identity of the table even if it's a partitioned table.  We can skip checking the replica identity for partitioned tables because the operations are actually performed on the leaf partitions (not the partitioned table).
  Reported-by: Brad Nicholson
  Author: Hou Zhijie
  Reviewed-by: Peter Smith, Amit Kapila
  Backpatch-through: 13
  Discussion: https://postgr.es/m/CAMMnM%3D8i5DohH%3DYKzV0_wYuYSYvuOJoL9F5nzXTc%2ByzsG1f6rg%40mail.gmail.com
* Avoid misbehavior when hash_table_bytes < bucket_size. (Tom Lane, 2022-08-13)
  It's possible to reach this case when work_mem is very small and tupsize is (relatively) very large.  In that case ExecChooseHashTableSize would get an assertion failure, or with asserts off it'd compute nbuckets = 0, which'd likely cause misbehavior later (I've not checked).  To fix, clamp the number of buckets to be at least 1.
  This is due to faulty conversion of old my_log2() coding in 28d936031.  Back-patch to v13, as that was.
  Zhang Mingli
  Discussion: https://postgr.es/m/beb64ca0-91e2-44ac-bf4a-7ea36275ec02@Spark
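  The clamp amounts to a one-liner; sketch:

      /* Guarantee at least one bucket even when work_mem is absurdly small. */
      nbuckets = Max(nbuckets, 1);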
* Fix handling of R/W expanded datums that are passed to SQL functions. (Tom Lane, 2022-08-10)
  fmgr_sql must make expanded-datum arguments read-only, because it's possible that the function body will pass the argument to more than one callee function.  If one of those functions takes the datum's R/W property as license to scribble on it, then later callees will see an unexpected value, leading to wrong answers.
  From a performance standpoint, it'd be nice to skip this in the common case that the argument value is passed to only one callee.  However, detecting that seems fairly hard, and certainly not something that I care to attempt in a back-patched bug fix.
  Per report from Adam Mackler.  This has been broken since we invented expanded datums, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/WScDU5qfoZ7PB2gXwNqwGGgDPmWzz08VdydcPFLhOwUKZcdWbblbo-0Lku-qhuEiZoXJ82jpiQU4hOjOcrevYEDeoAvz6nR0IU4IHhXnaCA=@mackler.email
  Discussion: https://postgr.es/m/187436.1660143060@sss.pgh.pa.us
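  A sketch of the idea, with assumed local names (argtypes[] holding the declared argument types is hypothetical): force each read-write expanded argument to read-only before the function body can hand it to its callees.

      for (int i = 0; i < fcinfo->nargs; i++)
          fcinfo->args[i].value =
              MakeExpandedObjectReadOnly(fcinfo->args[i].value,
                                         fcinfo->args[i].isnull,
                                         get_typlen(argtypes[i]));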
* Add CHECK_FOR_INTERRUPTS in ExecInsert's speculative insertion loop. (Tom Lane, 2022-08-04)
  Ordinarily the functions called in this loop ought to have plenty of CFIs themselves; but we've now seen a case where no such CFI is reached, making the loop uninterruptible.  Even though that's from a recently-introduced bug, it seems prudent to install a CFI at the loop level in all branches.
  Per discussion of bug #17558 from Andrew Kesper (an actual fix for that bug will follow).
  Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
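  The shape of the change is simply an interrupt check at the top of the retry loop; sketch:

      for (;;)
      {
          CHECK_FOR_INTERRUPTS();

          /* ... attempt speculative insertion; break out on success ... */
      }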
* Re-add SPICleanup for ABI compatibility in stable branch (Peter Eisentraut, 2022-07-18)
  This fixes an ABI break introduced by 604651880c71c5106a72529b9ce29eaad0cfab27.
  Author: Markus Wanner <markus.wanner@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/defd749a-8410-841d-1126-21398686d63d@enterprisedb.com
* Fix SPI's handling of errors during transaction commit. (Tom Lane, 2022-06-22)
  SPI_commit previously left it up to the caller to recover from any error occurring during commit.  Since that's complicated and requires use of low-level xact.c facilities, it's not too surprising that no caller got it right.  Let's move the responsibility for cleanup into spi.c.
  Doing that requires redefining SPI_commit as starting a new transaction, so that it becomes equivalent to SPI_commit_and_chain except that you get default transaction characteristics instead of preserving the prior transaction's characteristics.  We can make this pretty transparent API-wise by redefining SPI_start_transaction() as a no-op.  Callers that expect to do something in between might be surprised, but available evidence is that no callers do so.
  Having made that API redefinition, we can fix this mess by having SPI_commit[_and_chain] trap errors and start a new, clean transaction before re-throwing the error.  Likewise for SPI_rollback[_and_chain].  Some cleanup is also needed in AtEOXact_SPI, which was nowhere near smart enough to deal with SPI contexts nested inside a committing context.
  While plperl and pltcl need no changes beyond removing their now-useless SPI_start_transaction() calls, plpython needs some more work because it hadn't gotten the memo about catching commit/rollback errors in the first place.  Such an error resulted in longjmp'ing out of the Python interpreter, which leaks Python stack entries at present and is reported to crash Python 3.11 altogether.  Add the missing logic to catch such errors and convert them into Python exceptions.
  This is a back-patch of commit 2e517818f.  That's now aged long enough to reduce the concerns about whether it will break something, and we do need to ensure that supported branches will work with Python 3.11.
  Peter Eisentraut and Tom Lane
  Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
  Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
* Un-break whole-row Vars referencing domain-over-composite types. (Tom Lane, 2022-06-10)
  In commit ec62cb0aa, I foolishly replaced ExecEvalWholeRowVar's lookup_rowtype_tupdesc_domain call with just lookup_rowtype_tupdesc, because I didn't see how a domain could be involved there, and there were no regression test cases to jog my memory.  But the existing code was correct, so revert that change and add a test case showing why it's necessary.
  (Note: per comment in struct DatumTupleFields, it is correct to produce an output tuple that's labeled with the base composite type, not the domain; hence just blindly looking through the domain is correct here.)
  Per bug #17515 from Dan Kubb.  Back-patch to v11 where domains over composites became a thing.
  Discussion: https://postgr.es/m/17515-a24737438363aca0@postgresql.org
* Harden Memoization code against broken data types (David Rowley, 2022-06-08)
  Bug #17512 highlighted that a suitably broken data type could cause the backend to crash if either the hash function or equality function were in some way non-deterministic based on their input values.  Such a data type could cause a crash of the backend due to some code which assumes that we'll always find a hash table entry corresponding to an item in the Memoize LRU list.
  Here we remove the assumption that we'll always find the entry corresponding to the given LRU list item and add run-time checks to verify we have found the given item in the cache.
  This is not a fix for bug #17512, but it will turn the crash reported by that bug report into an internal ERROR.
  Reported-by: Ales Zeleny
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
  Backpatch-through: 14, where Memoize was added.
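  Roughly, a lookup that used to be assumed successful now gets a runtime check; a sketch with a hypothetical lookup helper:

      entry = memoize_lookup_entry(mstate, key);   /* hypothetical helper name */
      if (unlikely(entry == NULL))
          elog(ERROR, "could not find memoization table entry");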
* Revert applying column aliases to the output of whole-row Vars. (Tom Lane, 2022-03-17)
  In commit bf7ca1587, I had the bright idea that we could make the result of a whole-row Var (that is, foo.*) track any column aliases that had been applied to the FROM entry the Var refers to.  However, that's not terribly logically consistent, because now the output of the Var is no longer of the named composite type that the Var claims to emit.  bf7ca1587 tried to handle that by changing the output tuple values to be labeled with a blessed RECORD type, but that's really pretty disastrous: we can wind up storing such tuples onto disk, whereupon they're not readable by other sessions.
  The only practical fix I can see is to give up on what bf7ca1587 tried to do, and say that the column names of tuples produced by a whole-row Var are always those of the underlying named composite type, query aliases or no.  While this introduces some inconsistencies, it removes others, so it's not that awful in the abstract.  What *is* kind of awful is to make such a behavioral change in a back-patched bug fix.  But corrupt data is worse, so back-patched it will be.
  (A workaround available to anyone who's unhappy about this is to introduce an extra level of sub-SELECT, so that the whole-row Var is referring to the sub-SELECT's output and not to a named table type.  Then the Var is of type RECORD to begin with and there's no issue.)
  Per report from Miles Delahunty.  The faulty commit dates to 9.5, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/2950001.1638729947@sss.pgh.pa.us
* Fix memory leak in IndexScan node with reordering (Alexander Korotkov, 2022-02-14)
  Fix ExecReScanIndexScan() to free the referenced tuples while emptying the priority queue.  Backpatch to all supported versions.
  Discussion: https://postgr.es/m/CAHqSB9gECMENBQmpbv5rvmT3HTaORmMK3Ukg73DsX5H7EJV7jw%40mail.gmail.com
  Author: Aliaksandr Kalenik
  Reviewed-by: Tom Lane, Alexander Korotkov
  Backpatch-through: 10
* Test, don't just Assert, that mergejoin's inputs are in order. (Tom Lane, 2022-02-05)
  There are two Asserts in nodeMergejoin.c that are reachable if the input data is not in the expected order.  This seems way too fragile.  Alexander Lakhin reported a case where the assertions could be triggered with misconfigured foreign-table partitions, and bitter experience with unstable operating system collation definitions suggests another easy route to hitting them.  Neither Assert is in a place where we can't afford one more test-and-branch, so replace 'em with plain test-and-elog logic.
  Per bug #17395.  While the reported symptom is relatively recent, collation changes could happen anytime, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/17395-8c326292078d1a57@postgresql.org
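  The pattern of the change, with illustrative names: a check that used to be Assert-only now reports an error in every build.

      if (compareResult > 0)      /* inputs are not in the order the merge expects */
          elog(ERROR, "mergejoin input data is out of order");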
* Fix compiler warning in non-assert builds, introduced in f862d57057f. (Andres Freund, 2022-02-03)
  Discussion: https://postgr.es/m/20220203183655.ralgkh54sdcgysmn@alap3.anarazel.de
  Backpatch: 14-, like f862d57057f
* Further fix for EvalPlanQual with mix of local and foreign partitions. (Etsuro Fujita, 2022-02-03)
  We assume that direct-modify ForeignScan nodes cannot be re-evaluated during EvalPlanQual processing, but the rework for inherited UPDATE/DELETE in commit 86dc90056 changed things, without considering that, so that such ForeignScan nodes get called as part of the EvalPlanQual subtree during EvalPlanQual processing in the case of an inherited UPDATE/DELETE where the inheritance set contains foreign target relations.
  To avoid re-evaluating such ForeignScan nodes during EvalPlanQual processing, commit c3928b467 modified nodeForeignscan.c, but the assumption made there that ExecForeignScan() should never be called for such ForeignScan nodes during EvalPlanQual processing turned out to be wrong in some cases, leading to a segmentation fault or a "cannot re-evaluate a Foreign Update or Delete during EvalPlanQual" error.  Fix by modifying nodeForeignscan.c further to avoid re-evaluating such ForeignScan nodes even in ExecForeignScan()/ExecReScanForeignScan() during EvalPlanQual processing.  Since this makes non-reachable the test-and-elog added to ForeignNext() by commit c3928b467 that produced the aforesaid error, convert the test-and-elog to an Assert.
  Per bug #17355 from Alexander Lakhin.  Back-patch to v14 where both commits came in.
  Patch by me, reviewed and tested by Alexander Lakhin and Amit Langote.
  Discussion: https://postgr.es/m/17355-de8e362eb7001a96@postgresql.org
* Fix typo in comment. (Etsuro Fujita, 2022-01-28)
* Fix memory leak in indexUnchanged hint mechanism. (Peter Geoghegan, 2022-01-12)
  Commit 9dc718bd added a "logically unchanged by UPDATE" hinting mechanism, which is currently used within nbtree indexes only (see commit d168b666).  This mechanism determined whether or not the incoming item is a logically unchanged duplicate (a duplicate needed only for MVCC versioning purposes) once per row updated per non-HOT update.  This approach led to memory leaks which were noticeable with an UPDATE statement that updated sufficiently many rows, at least on tables that happen to have an expression index.
  On HEAD, fix the issue by adding a cache to the executor's per-index IndexInfo struct.
  Take a different approach on Postgres 14 to avoid an ABI break: simply pass down the hint to all indexes unconditionally with non-HOT UPDATEs.  This is deemed acceptable because the hint is currently interpreted within btinsert() as "perform a bottom-up index deletion pass if and when the only alternative is splitting the leaf page -- prefer to delete any LP_DEAD-set items first".  nbtree must always treat the hint as a noisy signal about what might work, as a strategy of last resort, with costs imposed on non-HOT updaters.  (The same thing might not be true within another index AM that applies the hint, which is why the original behavior is preserved on HEAD.)
  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Klaudie Willis <Klaudie.Willis@protonmail.com>
  Diagnosed-By: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/261065.1639497535@sss.pgh.pa.us
  Backpatch: 14-, where the hinting mechanism was added.
* Fix index-only scan plans, take 2. (Tom Lane, 2022-01-03)
  Commit 4ace45677 failed to fix the problem fully, because the same issue of attempting to fetch a non-returnable index column can occur when rechecking the indexqual after using a lossy index operator.  Moreover, it broke EXPLAIN for such indexquals (which indicates a gap in our test cases :-().
  Revert the code changes of 4ace45677 in favor of adding a new field to struct IndexOnlyScan, containing a version of the indexqual that can be executed against the index-returned tuple without using any non-returnable columns.  (The restrictions imposed by check_index_only guarantee this is possible, although we may have to recompute indexed expressions.)  Support construction of that during setrefs.c processing by marking IndexOnlyScan.indextlist entries as resjunk if they can't be returned, rather than removing them entirely.  (We could alternatively require setrefs.c to look up the IndexOptInfo again, but abusing resjunk this way seems like a reasonably safe way to avoid needing to do that.)
  This solution isn't great from an API-stability standpoint: if there are any extensions out there that build IndexOnlyScan structs directly, they'll be broken in the next minor releases.  However, only a very invasive extension would be likely to do such a thing.  There's no change in the Path representation, so typical planner extensions shouldn't have a problem.
  As before, back-patch to all supported branches.
  Discussion: https://postgr.es/m/3179992.1641150853@sss.pgh.pa.us
  Discussion: https://postgr.es/m/17350-b5bdcf476e5badbb@postgresql.org
* Flush Memoize cache when non-key parameters change, take 2 (David Rowley, 2021-11-24)
  It's possible that a subplan below a Memoize node contains a parameter from above the Memoize node.  If this parameter changes then cache entries may become out-dated due to the new parameter value.
  Previously Memoize was mistakenly not aware of this.  We fix this here by flushing the cache whenever a parameter that's not part of the cache key changes.
  Bug: #17213
  Reported by: Elvis Pranskevichus
  Author: David Rowley
  Discussion: https://postgr.es/m/17213-988ed34b225a2862@postgresql.org
  Backpatch-through: 14, where Memoize was added
* Revert "Flush Memoize cache when non-key parameters change"David Rowley2021-11-24
| | | | This reverts commit f94edb06ab60545a0e9e90aedc39e3b81b2b9e3f.
* Flush Memoize cache when non-key parameters change (David Rowley, 2021-11-24)
  It's possible that a subplan below a Memoize node contains a parameter from above the Memoize node.  If this parameter changes then cache entries may become out-dated due to the new parameter value.
  Previously Memoize was mistakenly not aware of this.  We fix this here by flushing the cache whenever a parameter that's not part of the cache key changes.
  Bug: #17213
  Reported by: Elvis Pranskevichus
  Author: David Rowley
  Discussion: https://postgr.es/m/17213-988ed34b225a2862@postgresql.org
  Backpatch-through: 14, where Memoize was added
* Allow Memoize to operate in binary comparison mode (David Rowley, 2021-11-24)
  Memoize would always use the hash equality operator for the cache key types to determine if the current set of parameters were the same as some previously cached set.  Certain types, such as floating point, where -0.0 and +0.0 differ in their binary representation but are classed as equal by the hash equality operator, may cause problems: unless the join uses that same operator, the join operator being used might be able to distinguish the two values, in which case we may accidentally return the incorrect rows out of the cache.
  To fix this here we add a binary mode to Memoize to allow it to compare the current set of parameters to previously cached values by comparing bit-by-bit rather than logically using the hash equality operator.  This binary mode is always used for LATERAL joins and it's used for normal joins when any of the join operators are not hashable.
  Reported-by: Tom Lane
  Author: David Rowley
  Discussion: https://postgr.es/m/3004308.1632952496@sss.pgh.pa.us
  Backpatch-through: 14, where Memoize was added
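  Bit-wise comparison of a cached key against a probe key can be expressed with datum_image_eq(), given the key type's typbyval/typlen; a sketch with illustrative variable names:

      /* true only if the two datums have identical stored bytes */
      if (!datum_image_eq(cachedkey, probekey, typbyval, typlen))
          return false;       /* treat as a cache miss */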
* Fix variable lifespan in ExecInitCoerceToDomain(). (Tom Lane, 2021-11-02)
  This undoes a mistake in 1ec7679f1: domainval and domainnull were meant to live across loop iterations, but they were incorrectly moved inside the loop.  The effect was only to emit useless extra EEOP_MAKE_READONLY steps, so it's not a big deal; nonetheless, back-patch to v13 where the mistake was introduced.
  Ranier Vilela
  Discussion: https://postgr.es/m/CAEudQAqXuhbkaAp-sGH6dR6Nsq7v28_0TPexHOm6FiDYqwQD-w@mail.gmail.com
* Avoid some other O(N^2) hazards in list manipulation. (Tom Lane, 2021-11-01)
  In the same spirit as 6301c3ada, fix some more places where we were using list_delete_first() in a loop and thereby risking O(N^2) behavior.  It's not clear that the lists manipulated in these spots can get long enough to be really problematic ... but it's not clear that they can't, either, and the fixes are simple enough.
  As before, back-patch to v13.
  Discussion: https://postgr.es/m/CD2F0E7F-9822-45EC-A411-AE56F14DEA9F@amazon.com
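  The general rewrite pattern (illustrative names; process_item() is hypothetical): walk the list with foreach and free it once at the end, instead of calling list_delete_first() per element, which is O(N^2) in total for array-based Lists.

      ListCell   *lc;

      foreach(lc, pending)
          process_item(lfirst(lc));
      list_free(pending);
      pending = NIL;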
* Fix assignment to array of domain over composite. (Tom Lane, 2021-10-19)
  An update such as "UPDATE ... SET fld[n].subfld = whatever" failed if the array elements were domains rather than plain composites.  That's because isAssignmentIndirectionExpr() failed to cope with the CoerceToDomain node that would appear in the expression tree in this case.  The result would typically be a crash, and even if we accidentally didn't crash, we'd not correctly preserve other fields of the same array element.
  Per report from Onder Kalaci.  Back-patch to v11 where arrays of domains came in.
  Discussion: https://postgr.es/m/PH0PR21MB132823A46AA36F0685B7A29AD8BD9@PH0PR21MB1328.namprd21.prod.outlook.com
* Fix checking of query type in plpgsql's RETURN QUERY command. (Tom Lane, 2021-10-03)
  Prior to v14, we insisted that the query in RETURN QUERY be of a type that returns tuples.  (For instance, INSERT RETURNING was allowed, but not plain INSERT.)  That happened indirectly because we opened a cursor for the query, so spi.c checked SPI_is_cursor_plan().  As a consequence, the error message wasn't terribly on-point, but at least it was there.
  Commit 2f48ede08 lost this detail.  Instead, plain RETURN QUERY insisted that the query be a SELECT (by checking for SPI_OK_SELECT) while RETURN QUERY EXECUTE failed to check the query type at all.  Neither of these changes was intended.
  The only convenient place to check this in the EXECUTE case is inside _SPI_execute_plan, because we haven't done parse analysis until then.  So we need to pass down a flag saying whether to enforce that the query returns tuples.  Fortunately, we can squeeze another boolean into struct SPIExecuteOptions without an ABI break, since there's padding space there.  (It's unlikely that any extensions would already be using this new struct, but preserving ABI in v14 seems like a smart idea anyway.)
  Within spi.c, it seemed like _SPI_execute_plan's parameter list was already ridiculously long, and I didn't want to make it longer.  So I thought of passing SPIExecuteOptions down as-is, allowing that parameter list to become much shorter.  This makes the patch a bit more invasive than it might otherwise be, but it's all internal to spi.c, so that seems fine.
  Per report from Marc Bachmann.  Back-patch to v14 where the faulty code came in.
  Discussion: https://postgr.es/m/1F2F75F0-27DF-406F-848D-8B50C7EEF06A@gmail.com
* Fix missing words in comment. (Heikki Linnakangas, 2021-09-07)
  Introduced by commit c3928b467a, backpatch to v14 like that one.
  Author: Amit Langote
  Discussion: https://www.postgresql.org/message-id/CA+HiwqFQgNLS6VGntMcuJV6erBFV425xA6wBVnY=41GK4zC0Bw@mail.gmail.com
* Improve error messages about misuse of SELECT INTO. (Tom Lane, 2021-08-21)
  Improve two places in plpgsql, and one in spi.c, where an error message would confusingly tell you that you couldn't use a SELECT query, when what you had written *was* a SELECT query.  The actual problem is that you can't use SELECT ... INTO in these contexts, but the messages failed to make that apparent.  Special-case SELECT INTO to make these errors more helpful.
  Also, fix the same spots in plpgsql, as well as several messages in exec_eval_expr(), to not quote the entire complained-of query or expression in the primary error message.  That behavior very easily led to violating our message style guideline about keeping the primary error message short and single-line.  Also, since the important part of the message was after the inserted text, it could make the real problem very hard to see.  We can report the query or expression as the first line of errcontext instead.
  Per complaint from Roger Mason.  Back-patch to v14, since (a) some of these messages are new in v14 and (b) v14's translatable strings are still somewhat in flux.  The problem's older than that of course, but I'm hesitant to change the behavior further back.
  Discussion: https://postgr.es/m/1914708.1629474624@sss.pgh.pa.us
* Use appropriate tuple descriptor in FDW batching (Tomas Vondra, 2021-08-12)
  The FDW batching code was using the same tuple descriptor for all slots (regular and plan slots), but that's incorrect: the subplan may use a different descriptor.  Currently this is benign, because batching is used only for INSERTs, and in that case the descriptors always match.  But that would change if we allow batching UPDATEs.
  Fix by copying the appropriate tuple descriptor.  Backpatch to 14, where the FDW batching was implemented.
  Author: Amit Langote
  Backpatch-through: 14, where FDW batching was added
  Discussion: https://postgr.es/m/CA%2BHiwqEWd5B0-e-RvixGGUrNvGkjH2s4m95%3DJcwUnyV%3Df0rAKQ%40mail.gmail.com
* Fix segfault during EvalPlanQual with mix of local and foreign partitions. (Heikki Linnakangas, 2021-08-12)
  It's not sensible to re-evaluate a direct-modify Foreign Update or Delete during EvalPlanQual.  However, ExecInitForeignScan() can still get called if a table mixes local and foreign partitions.  EvalPlanQualStart() left the es_result_relations array uninitialized in the child EPQ EState, but ExecInitForeignScan() still expected to find it.  That caused a segfault.
  Fix by skipping the es_result_relations lookup during EvalPlanQual processing.  To make things a bit more robust, also skip the BeginDirectModify calls, and add a runtime check that ExecForeignScan() is not called on direct-modify foreign scans during EvalPlanQual processing.
  This is new in v14, commit 1375422c782.  Before that, EvalPlanQualStart() copied the whole ResultRelInfo array to the EPQ EState.  Backpatch to v14.
  Report and diagnosis by Andrey Lepikhov.
  Discussion: https://www.postgresql.org/message-id/cb2b808d-cbaa-4772-76ee-c8809bafcf3d%40postgrespro.ru
* Fix oversight in commit 1ec7fca8592178281cd5cdada0f27a340fb813fc. (Etsuro Fujita, 2021-08-02)
  I failed to account for the possibility that when ExecAppendAsyncEventWait() notifies multiple async-capable nodes using postgres_fdw, a preceding node might invoke process_pending_request() to process a pending asynchronous request made by a succeeding node.  In that case the succeeding node should produce a tuple to return to the parent Append node from tuples fetched by process_pending_request() when notified.  Repair.
  Per buildfarm via Michael Paquier.  Back-patch to v14, like the previous commit.  Thanks to Tom Lane for testing.
  Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
* postgres_fdw: Fix handling of pending asynchronous requests. (Etsuro Fujita, 2021-07-30)
  A pending asynchronous request is handled by process_pending_request(), which previously not only processed an in-progress remote query but performed ExecForeignScan() to produce a tuple to return to the local server asynchronously from the result of the remote query.  But that led to a server crash when executing a query or led to an "InstrStartNode called twice in a row" or "InstrEndLoop called on running node" failure when doing EXPLAIN ANALYZE of it, in cases where the plan tree for it contained multiple async-capable nodes accessing the same initplan/subplan that contained multiple async-capable nodes scanning the same foreign tables as for the parent async-capable nodes, as reported by Andrey Lepikhov.  The reason is that the second step in process_pending_request() invoked when executing the initplan/subplan for one of the parent async-capable nodes caused recursive execution of the initplan/subplan for another of the parent async-capable nodes.
  To fix, split process_pending_request() into the two steps and postpone the second step until ForeignAsyncConfigureWait() is called for each of the pending asynchronous requests.  Also, in ExecAppendAsyncEventWait() we assumed that FDWs would register at least one wait event in a WaitEventSet created there when they were called from ForeignAsyncConfigureWait() in that function, but allow FDWs to register zero wait events in the WaitEventSet; modify ExecAppendAsyncEventWait() to just return in that case.
  Oversight in commit 27e1f1456.  Back-patch to v14 where that commit went in.
  Andrey Lepikhov and Etsuro Fujita
  Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru