path: root/src/backend/optimizer/plan/createplan.c
* Track the number of presorted outer pathkeys in MergePath (Richard Guo, 11 days ago)

  When creating an explicit Sort node for the outer path of a mergejoin, we need to determine the number of presorted keys of the outer path to decide whether explicit incremental sort can be applied. Currently, this is done by repeatedly calling pathkeys_count_contained_in.

  This patch caches the number of presorted outer pathkeys in MergePath, allowing us to save several calls to pathkeys_count_contained_in. It can be considered a complement to the changes in commit 828e94c9d.

  Reported-by: David Rowley <dgrowleyml@gmail.com>
  Author: Richard Guo <guofenglinux@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Discussion: https://postgr.es/m/CAApHDvqvBireB_w6x8BN5txdvBEHxVgZBt=rUnpf5ww5P_E_ww@mail.gmail.com
* Relax ordering-related hardcoded btree requirements in planning (Peter Eisentraut, 2025-04-06)

  There were several places in ordering-related planning where a requirement for btree was hardcoded but an amcanorder index could suffice. This fixes that. We just need to do the necessary mapping between strategy numbers and compare types and adjust some related APIs so that this works independent of btree strategy numbers. For instance, non-btree amcanorder indexes can now be used to support sorting and merge joins. Also, predtest.c works independent of btree strategy numbers now.

  To avoid performance regressions, some details on btree and other built-in index types are still hardcoded as shortcuts, but other index types now have access to the same features by providing the required flags and callbacks.

  Author: Mark Dilger <mark.dilger@enterprisedb.com>
  Co-authored-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
* Convert PathKey to use CompareType (Peter Eisentraut, 2025-04-04)

  Change the PathKey struct to use CompareType to record the sort direction instead of hardcoding btree strategy numbers. The CompareType is then converted to the index-type-specific strategy when the plan is created.

  This reduces the number of places btree strategy numbers are hardcoded, and it's a self-contained subset of a larger effort to allow non-btree indexes to behave like btrees.

  Author: Mark Dilger <mark.dilger@enterprisedb.com>
  Co-authored-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
* Improve EXPLAIN's display of window functions. (Tom Lane, 2025-03-11)

  Up to now we just punted on showing the window definitions used in a plan, with window function calls represented as "OVER (?)". To improve that, show the window definition implemented by each WindowAgg plan node, and reference their window names in OVER. For nameless window clauses generated by "OVER (...)", assign unique names w1, w2, etc.

  In passing, re-order the properties shown for a WindowAgg node so that the Run Condition (if any) appears after the Window property and before the Filter (if any). This seems more sensible since the Run Condition is associated with the Window and acts before the Filter.

  Thanks to David G. Johnston and Álvaro Herrera for design suggestions.

  Author: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: David Rowley <dgrowleyml@gmail.com>
  Discussion: https://postgr.es/m/144530.1741469955@sss.pgh.pa.us
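  For illustration, a query of this shape (with a hypothetical table t(x, y)) defines a named window; per the commit, EXPLAIN now shows the window definition for the WindowAgg node and the OVER clause references the window name instead of printing "OVER (?)":

    SELECT x, sum(y) OVER w
    FROM t
    WINDOW w AS (PARTITION BY x ORDER BY y);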
* Move PartitionPruneInfo out of plan nodes into PlannedStmt (Amit Langote, 2025-01-30)

  This moves PartitionPruneInfo from plan nodes to PlannedStmt, simplifying traversal by centralizing all PartitionPruneInfo structures in a single list in it, which holds all instances for the main query and its subqueries. Instead of plan nodes (Append or MergeAppend) storing PartitionPruneInfo pointers, they now reference an index in this list.

  A bitmapset field is added to PartitionPruneInfo to store the RT indexes corresponding to the apprelids field in Append or MergeAppend. This allows execution pruning logic to verify that it operates on the correct plan node, mainly to facilitate debugging.

  Duplicated code in set_append_references() and set_mergeappend_references() is refactored into a new function, register_pruneinfo(). This updates RT indexes by applying rtoffset and adds PartitionPruneInfo to the global list in PlannerGlobal.

  By allowing pruning to be performed without traversing the plan tree, this change lays the groundwork for runtime initial pruning to occur independently of plan tree initialization.

  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> (earlier version)
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Reviewed-by: Tomas Vondra <tomas@vondra.me>
  Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
* Add OLD/NEW support to RETURNING in DML queries. (Dean Rasheed, 2025-01-16)

  This allows the RETURNING list of INSERT/UPDATE/DELETE/MERGE queries to explicitly return old and new values by using the special aliases "old" and "new", which are automatically added to the query (if not already defined) while parsing its RETURNING list, allowing things like:

    RETURNING old.colname, new.colname, ...

    RETURNING old.*, new.*

  Additionally, a new syntax is supported, allowing the names "old" and "new" to be changed to user-supplied alias names, e.g.:

    RETURNING WITH (OLD AS o, NEW AS n) o.colname, n.colname, ...

  This is useful when the names "old" and "new" are already defined, such as inside trigger functions, allowing backwards compatibility to be maintained -- the interpretation of any existing queries that happen to already refer to relations called "old" or "new", or use those as aliases for other relations, is not changed.

  For an INSERT, old values will generally be NULL, and for a DELETE, new values will generally be NULL, but that may change for an INSERT with an ON CONFLICT ... DO UPDATE clause, or if a query rewrite rule changes the command type. Therefore, we put no restrictions on the use of old and new in any DML queries.

  Dean Rasheed, reviewed by Jian He and Jeff Davis.

  Discussion: https://postgr.es/m/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com
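  For illustration, a minimal sketch of the new syntax against a hypothetical table accounts(id, balance):

    UPDATE accounts
       SET balance = balance + 100
     WHERE id = 42
     RETURNING old.balance AS balance_before, new.balance AS balance_after;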
* Update copyright for 2025 (Bruce Momjian, 2025-01-01)

  Backpatch-through: 13
* Convert SetOp to read its inputs as outerPlan and innerPlan. (Tom Lane, 2024-12-19)

  The original design for set operations involved appending the two input relations into one and adding a flag column that allows distinguishing which side each row came from. Then the SetOp node pries them apart again based on the flag. This is bizarre. The only apparent reason to do it is that when sorting, we'd only need one Sort node not two. But since sorting is at least O(N log N), sorting all the data is actually worse than sorting each side separately --- plus, we have no chance of taking advantage of presorted input. On top of that, adding the flag column frequently requires an additional projection step that adds cycles, and then the Append node isn't free either. Let's get rid of all of that and make the SetOp node have two separate children, using the existing outerPlan/innerPlan infrastructure.

  This initial patch re-implements nodeSetop.c and does a bare minimum of work on the planner side to generate correctly-shaped plans. In particular, I've tried not to change the cost estimates here, so that the visible changes in the regression test results will only involve removal of useless projection steps and not any changes in whether to use sorted vs hashed mode.

  For SORTED mode, we combine successive identical tuples from each input into groups, and then merge-join the groups. The tuple comparisons now use SortSupport instead of simple equality, but the group-formation part should involve roughly the same number of tuple comparisons as before. The cross-comparisons between left and right groups probably add to that, but I'm not sure how to quantify how many more comparisons we might need.

  For HASHED mode, nodeSetop's logic is almost the same as before, just refactored into two separate loops instead of one loop that has an assumption that it will see all the left-hand inputs first.

  In both modes, I added early-exit logic to not bother reading the right-hand relation if the left-hand input is empty, since neither INTERSECT nor EXCEPT modes can produce any output if the left input is empty. This could have been done before in the hashed mode, but not in sorted mode. Sorted mode can also stop as soon as it exhausts the left input; any remaining right-hand tuples cannot have matches.

  Also, this patch adds some infrastructure for detecting whether child plan nodes all output the same type of tuple table slot. If they do, the hash table logic can use slightly more efficient code based on assuming that that's the input slot type it will see. We'll make use of that infrastructure in other plan node types later.

  Patch by me; thanks to Richard Guo and David Rowley for review.

  Discussion: https://postgr.es/m/1850138.1731549611@sss.pgh.pa.us
* Remove useless casts to (void *) (Peter Eisentraut, 2024-11-28)

  Many of them just seem to have been copied around for no real reason. Their presence causes (small) risks of hiding actual type mismatches or silently discarding qualifiers.

  Discussion: https://www.postgresql.org/message-id/flat/461ea37c-8b58-43b4-9736-52884e862820@eisentraut.org
* Track scan reversals in MergeJoin (Peter Eisentraut, 2024-10-14)

  The MergeJoin struct was tracking "mergeStrategies", which were an array of btree strategy numbers, purely for the purpose of comparing it later against btree strategies to determine if the scan direction was forward or reverse. Change that. Instead, track "mergeReversals", an array of bool, to indicate the same without an unfortunate assumption that a strategy number refers specifically to a btree strategy.

  Author: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
* Track sort direction in SortGroupClause (Peter Eisentraut, 2024-10-14)

  Functions make_pathkey_from_sortop() and transformWindowDefinitions(), which receive a SortGroupClause, were determining the sort order (ascending vs. descending) by comparing that structure's operator strategy to BTLessStrategyNumber, but could just as easily have gotten it from the SortGroupClause object, if it had such a field, so add one. This reduces the number of places that hardcode the assumption that the strategy refers specifically to a btree strategy, rather than some other index AM's operators.

  Author: Mark Dilger <mark.dilger@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
* Adjust EXPLAIN's output for disabled nodes (David Rowley, 2024-10-11)

  c01743aa4 added EXPLAIN output to display the plan node's disabled_node count whenever that count is above 0. Seemingly, there weren't many people who liked that output, as each parent of a disabled node would also have a "Disabled Nodes" output due to the way disabled_nodes is accumulated towards the root plan node. It was often hard and sometimes impossible to figure out which nodes were disabled from looking at EXPLAIN. You might think it would be possible to manually add up the numbers from the "Disabled Nodes" output of a given node's children to figure out if that node has a higher disabled_nodes count than its children, but that wouldn't have worked for Append and Merge Append nodes if some disabled child nodes were run-time pruned during init plan. Those children are not displayed in EXPLAIN.

  Here we attempt to improve this output by showing "Disabled: true" against only the nodes which are explicitly disabled themselves. That seems to be the output desired by most of the people who voiced their opinion. This is done by summing up the disabled_nodes of the given node's children and checking if that number is less than the disabled_nodes of the current node.

  This commit also fixes a bug in make_sort() which was neglecting to set the Sort's disabled_nodes field. This should have copied what was done in cost_sort(), but it hadn't been updated. With the new output, the choice to not maintain that field properly was clearly wrong as the disabled-ness of the node was attributed to the Sort's parent instead.

  Reviewed-by: Laurenz Albe, Alena Rybakina
  Discussion: https://postgr.es/m/9e4ad616bebb103ec2084bf6f724cfc739e7fabb.camel@cybertec.at
* Consider explicit incremental sort for mergejoins (Richard Guo, 2024-10-09)

  For a mergejoin, if the given outer path or inner path is not already well enough ordered, we need to do an explicit sort. Currently, we only consider explicit full sort and do not account for incremental sort.

  In this patch, for the outer path of a mergejoin, we choose to use explicit incremental sort if it is enabled and there are presorted keys. For the inner path, though, we cannot use incremental sort because it does not support mark/restore at present.

  The rationale is based on the assumption that incremental sort is always faster than full sort when there are presorted keys, a premise that has been applied in various parts of the code. In addition, the current cost model tends to favor incremental sort as being cheaper than full sort in the presence of presorted keys, making it reasonable not to consider full sort in such cases.

  One might ask what happens if a mergejoin with an incremental sort as the outer path is selected as the inner path of another mergejoin. This should not be a problem, because mergejoin itself does not support mark/restore either, and we will add a Material node on top of it anyway in this case (see final_cost_mergejoin).

  There is one ensuing plan change in the regression tests, and we have to modify that test case to ensure that it continues to test what it is intended to.

  No backpatch as this could result in plan changes.

  Author: Richard Guo
  Reviewed-by: David Rowley, Tomas Vondra
  Discussion: https://postgr.es/m/CAMbWs49x425QrX7h=Ux05WEnt8GS757H-jOP3_xsX5t1FoUsZw@mail.gmail.com
* Fix order of parameters in a cost_sort call (Richard Guo, 2024-09-09)

  In label_sort_with_costsize, the cost_sort function is called with the parameters 'input_disabled_nodes' and 'input_cost' in the wrong order. This does not cause any plan diffs in the regression tests, because label_sort_with_costsize is only used to label the Sort node nicely for EXPLAIN, and cost numbers are not displayed in regression tests.

  Oversight in e22253467. Fixed by passing arguments in the right order.

  Per report from Alexander Lakhin running UBSan.

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/a9b7231d-68bc-f117-a07c-96688f3e6aef@gmail.com
* Show number of disabled nodes in EXPLAIN ANALYZE output. (Robert Haas, 2024-08-21)

  Now that disable_cost is not included in the cost estimate, there's no visible sign in EXPLAIN output of which plan nodes are disabled. Fix that by propagating the number of disabled nodes from Path to Plan, and then showing it in the EXPLAIN output.

  There is some question about whether this is a desirable change. While I personally believe that it is, it seems best to make it a separate commit, in case we decide to back out just this part, or rework it.

  Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.

  Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
* Treat number of disabled nodes in a path as a separate cost metric. (Robert Haas, 2024-08-21)

  Previously, when a path type was disabled by e.g. enable_seqscan=false, we either avoided generating that path type in the first place, or more commonly, we added a large constant, called disable_cost, to the estimated startup cost of that path. This latter approach can distort planning. For instance, an extremely expensive non-disabled path could seem to be worse than a disabled path, especially if the full cost of that path node need not be paid (e.g. due to a Limit). Or, as in the regression test whose expected output changes with this commit, the addition of disable_cost can make two paths that would normally be distinguishable in cost seem to have fuzzily the same cost.

  To fix that, we now count the number of disabled path nodes and consider that a high-order component of both the startup cost and the total cost. Hence, the path list is now sorted by disabled_nodes and then by total_cost, instead of just by the latter, and likewise for the partial path list. It is important that this number is a count and not simply a Boolean; else, as soon as we're unable to respect disabled path types in all portions of the path, we stop trying to avoid them where we can.

  Because the path list is now sorted by the number of disabled nodes, the join prechecks must compute the count of disabled nodes during the initial cost phase instead of postponing it to final cost time.

  Counts of disabled nodes do not cross subquery levels; at present, there is no reason for them to do so, since we do not postpone path selection across subquery boundaries (see make_subplan).

  Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.

  Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
* Fix "failed to find plan for subquery/CTE" errors in EXPLAIN. (Tom Lane, 2024-08-09)

  To deparse a reference to a field of a RECORD-type output of a subquery, EXPLAIN normally digs down into the subquery's plan to try to discover exactly which anonymous RECORD type is meant. However, this can fail if the subquery has been optimized out of the plan altogether on the grounds that no rows could pass the WHERE quals, which has been possible at least since 3fc6e2d7f. There isn't anything remaining in the plan tree that would help us, so fall back to printing the field name as "fN" for the N'th column of the record. (This will actually be the right thing some of the time, since it matches the column names we assign to RowExprs.)

  In passing, fix a comment typo in create_projection_plan, which I noticed while experimenting with an alternative fix for this.

  Per bug #18576 from Vasya B. Back-patch to all supported branches.

  Richard Guo and Tom Lane

  Discussion: https://postgr.es/m/18576-9feac34e132fea9e@postgresql.org
* Restrict accesses to non-system views and foreign tables during pg_dump. (Masahiko Sawada, 2024-08-05)

  When pg_dump retrieves the list of database objects and performs the data dump, there was a possibility that objects could be replaced with others of the same name, such as views, and then accessed. This vulnerability could result in code execution with superuser privileges during the pg_dump process.

  This issue can arise when dumping data of sequences, foreign tables (only 13 or later), or tables registered with a WHERE clause in the extension configuration table.

  To address this, pg_dump now utilizes the newly introduced restrict_nonsystem_relation_kind GUC parameter to restrict the accesses to non-system views and foreign tables during the dump process. This new GUC parameter is added to back branches too, but these changes do not require cluster recreation.

  Back-patch to all supported branches.

  Reviewed-by: Noah Misch
  Security: CVE-2024-7348
  Backpatch-through: 12
* Remove redundant code in create_gather_merge_path (Richard Guo, 2024-07-23)

  In create_gather_merge_path, we should always guarantee that the subpath is adequately ordered, and we do not add a Sort node in createplan.c for a Gather Merge node. Therefore, the 'else' branch in create_gather_merge_path, which computes the cost for a Sort node, is redundant. This patch removes the redundant code and emits an error if the subpath is not sufficiently ordered.

  Meanwhile, this patch changes the check for the subpath's pathkeys in create_gather_merge_plan to an Assert.

  Author: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs48u=0bWf3epVtULjJ-=M9Hbkz+ieZQAOS=BfbXZFqbDCg@mail.gmail.com
* Fix query pullup issue with WindowClause runCondition (David Rowley, 2024-05-05)

  94985c210 added code to detect when WindowFuncs were monotonic and allowed additional quals to be "pushed down" into the subquery to be used as WindowClause runConditions in order to short-circuit execution in nodeWindowAgg.c.

  The Node representation of runConditions wasn't well chosen, and because we do qual pushdown before planning the subquery, the planning of the subquery could perform subquery pull-up of nested subqueries. For WindowFuncs with args, the arguments could be changed after pushing the qual down to the subquery.

  This was made more difficult by the fact that the code duplicated the WindowFunc inside an OpExpr to include in the WindowClause's runCondition field. This could result in duplication of subqueries, and a pull-up of such a subquery could result in another initplan parameter being issued for the 2nd version of the subplan. This could result in errors such as:

    ERROR: WindowFunc not found in subplan target lists

  To fix this, we change the node representation of these run conditions: instead of storing an OpExpr containing the WindowFunc in a list inside WindowClause, we now store a new node type named WindowFuncRunCondition within a new field in the WindowFunc. These get transformed into OpExprs later in planning once subquery pull-up has been performed.

  This problem did exist in v15 and v16, but that was fixed by 9d36b883b and e5d20bbd.

  Cat version bump due to new node type and modifying WindowFunc struct.

  Bug: #18305
  Reported-by: Zuming Jiang
  Discussion: https://postgr.es/m/18305-33c49b4c830b37b3%40postgresql.org
* Add support for MERGE ... WHEN NOT MATCHED BY SOURCE. (Dean Rasheed, 2024-03-30)

  This allows MERGE commands to include WHEN NOT MATCHED BY SOURCE actions, which operate on rows that exist in the target relation, but not in the data source. These actions can execute UPDATE, DELETE, or DO NOTHING sub-commands.

  This is in contrast to already-supported WHEN NOT MATCHED actions, which operate on rows that exist in the data source, but not in the target relation. To make this distinction clearer, such actions may now be written as WHEN NOT MATCHED BY TARGET.

  Writing WHEN NOT MATCHED without specifying BY SOURCE or BY TARGET is equivalent to writing WHEN NOT MATCHED BY TARGET.

  Dean Rasheed, reviewed by Alvaro Herrera, Ted Yu and Vik Fearing.

  Discussion: https://postgr.es/m/CAEZATCWqnKGc57Y_JanUBHQXNKcXd7r=0R4NEZUVwP+syRkWbA@mail.gmail.com
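  For illustration, a sketch of the new form with hypothetical target(id, val) and source(id, val) tables; the BY SOURCE action deletes target rows that no longer appear in the source:

    MERGE INTO target AS t
    USING source AS s ON t.id = s.id
    WHEN MATCHED THEN
      UPDATE SET val = s.val
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (id, val) VALUES (s.id, s.val)
    WHEN NOT MATCHED BY SOURCE THEN
      DELETE;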
* Postpone reparameterization of paths until create_plan(). (Tom Lane, 2024-03-19)

  When considering nestloop paths for individual partitions within a partitionwise join, if the inner path is parameterized, it is parameterized by the topmost parent of the outer rel, not the corresponding outer rel itself. Therefore, we need to translate the parameterization so that the inner path is parameterized by the corresponding outer rel.

  Up to now, we did this while generating join paths. However, that's problematic because we must also translate some expressions that are shared across all paths for a relation, such as restriction clauses (kept in the RelOptInfo and/or IndexOptInfo) and TableSampleClauses (kept in the RangeTblEntry). The existing code fails to translate these at all, leading to wrong answers, odd failures such as "variable not found in subplan target list", or executor crashes. But we can't modify them during path generation, because that would break things if we end up choosing some non-partitioned-join path.

  So this patch postpones reparameterization of the inner path until createplan.c, where it is safe to modify the referenced RangeTblEntry, RelOptInfo or IndexOptInfo, because we have made a final choice of which Path to use. We do still have to check during path generation that the reparameterization will be possible. So we introduce a new function path_is_reparameterizable_by_child() to detect that.

  The duplication between path_is_reparameterizable_by_child() and reparameterize_path_by_child() is a bit annoying, but there seems no other good answer. A small benefit is that we can avoid building useless reparameterized trees in cases where a non-partitioned join is ultimately chosen. Also, reparameterize_path_by_child() can now be allowed to scribble on the input paths, saving a few cycles.

  This fix repairs the same problems previously addressed in the back branches by commits 62f120203 et al.

  Richard Guo, reviewed at various times by Ashutosh Bapat, Andrei Lepikhov, Alena Rybakina, Robert Haas, and myself

  Discussion: https://postgr.es/m/CAMbWs496+N=UAjOc=rcD3P7B6oJe4rZw08e_TZRUsWbPxZW3Tw@mail.gmail.com
* Improve NestLoopParam generation for lateral subqueries (David Rowley, 2024-01-26)

  In cases involving a LATERAL joined subquery, it was possible that when the same Var was mentioned both in the lateral references and in the outer Vars of the scan clauses, that Var wouldn't be assigned to the same NestLoopParam. This could cause issues in Memoize: the cache key would reference the Var for the scan clauses, but when the parameter for the lateral references changed, some code in Memoize would see that some other parameter had changed that's not part of the cache key and end up purging the entire cache as a result, thinking the cache had become stale.

  This could result in a Nested Loop -> Memoize plan being quite inefficient as, in the worst case, the cache purging could result in never getting a cache hit. In no cases could this problem lead to incorrect query results.

  Here we switch the order of operations so that we create NestLoopParams for the lateral references first, before doing replace_nestloop_params(). replace_nestloop_params() will find and reuse the existing NestLoopParam in cases where the Var exists in both locations.

  Author: Richard Guo
  Reviewed-by: Tom Lane, David Rowley
  Discussion: https://postgr.es/m/CAMbWs48XHJEK1Q1CzAQ7L9sTANTs9W1cepXu8%3DKc0quUL%2Btg4Q%40mail.gmail.com
* Update copyright for 2024 (Bruce Momjian, 2024-01-03)

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12
* Re-allow FDWs and custom scan providers to replace joins with pseudoconstant quals. (Etsuro Fujita, 2023-08-15)

  This was disabled in commit 6f80a8d9c due to the lack of support for handling of pseudoconstant quals assigned to replaced joins in createplan.c. To re-allow it, this patch adds the support by 1) modifying the ForeignPath and CustomPath structs so that if they represent foreign and custom scans replacing a join with a scan, they store the list of RestrictInfo nodes to apply to the join, as in JoinPaths, and by 2) modifying create_scan_plan() in createplan.c so that it uses that list in that case, instead of the baserestrictinfo list, to get pseudoconstant quals assigned to the join, as mentioned in the commit message for that commit.

  Important item for the release notes: this is non-backwards-compatible since it modifies the ForeignPath and CustomPath structs, as mentioned above, and changes the argument lists for FDW helper functions create_foreignscan_path(), create_foreign_join_path(), and create_foreign_upper_path().

  Richard Guo, with some additional changes by me, reviewed by Nishant Sharma, Suraj Kharage, and Richard Guo.

  Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
* Allow plan nodes with initPlans to be considered parallel-safe. (Tom Lane, 2023-07-14)

  If the plan itself is parallel-safe, and the initPlans are too, there's no reason anymore to prevent the plan from being marked parallel-safe. That restriction (dating to commit ab77a5a45) was really a special case of the fact that we couldn't transmit subplans to parallel workers at all. We fixed that in commit 5e6d8d2bb and follow-ons, but this case never got addressed.

  We still forbid attaching initPlans to a Gather node that's inserted pursuant to debug_parallel_query = regress. That's because, when we hide the Gather from EXPLAIN output, we'd hide the initPlans too, causing cosmetic regression diffs. It seems inadvisable to kluge EXPLAIN to the extent required to make the output look the same, so just don't do it in that case.

  Along the way, this also takes care of some sloppiness about updating path costs to match when we move initplans from one place to another during createplan.c and setrefs.c. Since all the planning decisions are already made by that point, this is just cosmetic; but it seems good to keep EXPLAIN output consistent with where the initplans are.

  The diff in query_planner() might be worth remarking on. I found that one because after fixing things to allow parallel-safe initplans, one partition_prune test case changed plans (as shown in the patch) --- but only when debug_parallel_query was active. The reason proved to be that we only bothered to mark Result nodes as potentially parallel-safe when debug_parallel_query is on. This neglects the fact that parallel-safety may be of interest for a sub-query even though the Result itself doesn't parallelize.

  Discussion: https://postgr.es/m/1129530.1681317832@sss.pgh.pa.us
* Remove redundant PARTITION BY columns from WindowClauses (David Rowley, 2023-07-03)

  Here we adjust the query planner to have it remove items from a window clause's PARTITION BY clause in cases where the pathkey for a column in the PARTITION BY clause is redundant.

  Doing this allows the optimization added in 9d9c02ccd to stop window aggregation early rather than going into "pass-through" mode to find tuples belonging to the next partition. Also, when we manage to remove all PARTITION BY columns, we now no longer needlessly check that the current tuple belongs to the same partition as the last tuple in nodeWindowAgg.c. If the pathkey was redundant then all tuples must contain the same value for the given redundant column, so there's no point in checking that during execution.

  Author: David Rowley
  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/CAApHDvo2ji+hdxrxfXtRtsfSVw3to2o1nCO20qimw0dUGK8hcQ@mail.gmail.com
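  For illustration, a sketch with a hypothetical table t(a, b, c): since the WHERE clause forces a = b, the pathkey for b is redundant and the planner can drop it from the PARTITION BY list:

    SELECT a, b, count(*) OVER (PARTITION BY a, b ORDER BY c)
    FROM t
    WHERE a = b;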
* Revert "Move PartitionPruneInfo out of plan nodes into PlannedStmt" (Alvaro Herrera, 2023-05-04)

  This reverts commit ec386948948c and its fixup 589bb816499e.

  This change was intended to support query planning avoiding acquisition of locks on partitions that were going to be pruned; however, the overall project took a different direction at [1] and this bit is no longer needed. Put things back the way they were as agreed in [2], to avoid unnecessary complexity.

  Discussion: [1] https://postgr.es/m/4191508.1674157166@sss.pgh.pa.us
  Discussion: [2] https://postgr.es/m/20230502175409.kcoirxczpdha26wt@alvherre.pgsql
* Enable use of Memoize atop an Append that came from UNION ALL. (Tom Lane, 2023-03-16)

  create_append_path() would only apply get_baserel_parampathinfo when the path is for a partitioned table, but it's also potentially useful for paths for UNION ALL appendrels. Specifically, that supports building a Memoize path atop this one.

  While we're in the vicinity, delete some dead code in create_merge_append_plan(): there's no need for it to support parameterized MergeAppend paths, and it doesn't look like that is going to change anytime soon. It'll be easy enough to undo this when/if it becomes useful.

  Richard Guo

  Discussion: https://postgr.es/m/CAMbWs4_ABSu4PWG2rE1q10tJugEXHWgru3U8dAgkoFvgrb6aEA@mail.gmail.com
* Remove gratuitous assumptions about what make_modifytable can see. (Tom Lane, 2023-02-20)

  For no clearly good reason, make_modifytable assumed that it could not reach its get-the-FDW-info-the-hard-way path in MERGE. It's currently possible to demonstrate that assertion failing, which seems to be due to an upstream planner bug; but there's no good reason to do it like this at all. Let's apply the principle of separation of concerns and make the MERGE check separately, after getting or not getting the fdwroutine pointer.

  Per report from Alexander Lakhin. No test case, since I think the potential test condition will go away soon.

  Discussion: https://postgr.es/m/36bee393-b351-16ac-93b2-d46d83637e45@gmail.com
* Remove dead NoMovementScanDirection code (David Rowley, 2023-02-01)

  Here we remove some dead code from heapgettup() and heapgettup_pagemode() which was trying to support NoMovementScanDirection scans. This code can never be reached as standard_ExecutorRun() never calls ExecutePlan with NoMovementScanDirection.

  Additionally, plans which were scanning an unordered index would use NoMovementScanDirection rather than ForwardScanDirection. There was no real need for this, so here we adjust this so we use ForwardScanDirection for unordered index scans. A comment in pathnodes.h claimed that NoMovementScanDirection was used for PathKey reasons, but if that was true, it no longer is, per code in build_index_paths().

  This does change the non-text format of the EXPLAIN output so that unordered index scans now have a "Forward" scan direction rather than "NoMovement". The text format of EXPLAIN has not changed.

  Author: Melanie Plageman
  Reviewed-by: Tom Lane, David Rowley
  Discussion: https://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com
* Invent "join domains" to replace the below_outer_join hack. (Tom Lane, 2023-01-30)

  EquivalenceClasses are now understood as applying within a "join domain", which is a set of inner-joined relations (possibly underneath an outer join). We no longer need to treat an EC from below an outer join as a second-class citizen.

  I have hopes of eventually being able to treat outer-join clauses via EquivalenceClasses, by means of only applying deductions within the EC's join domain. There are still problems in the way of that, though, so for now the reconsider_outer_join_clause logic is still here.

  I haven't been able to get rid of RestrictInfo.is_pushed_down either, but I wonder if that could be recast using JoinDomains.

  I had to hack one test case in postgres_fdw.sql to make it still test what it was meant to, because postgres_fdw is inconsistent about how it deals with quals containing non-shippable expressions; see https://postgr.es/m/1691374.1671659838@sss.pgh.pa.us. That should be improved, but I don't think it's within the scope of this patch series.

  Patch by me; thanks to Richard Guo for review.

  Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
* Make Vars be outer-join-aware. (Tom Lane, 2023-01-30)

  Traditionally we used the same Var struct to represent the value of a table column everywhere in parse and plan trees. This choice predates our support for SQL outer joins, and it's really a pretty bad idea with outer joins, because the Var's value can depend on where it is in the tree: it might go to NULL above an outer join. So expression nodes that are equal() per equalfuncs.c might not represent the same value, which is a huge correctness hazard for the planner.

  To improve this, decorate Var nodes with a bitmapset showing which outer joins (identified by RTE indexes) may have nulled them at the point in the parse tree where the Var appears. This allows us to trust that equal() Vars represent the same value. A certain amount of klugery is still needed to cope with cases where we re-order two outer joins, but it's possible to make it work without sacrificing that core principle. PlaceHolderVars receive similar decoration for the same reason.

  In the planner, we include these outer join bitmapsets into the relids that an expression is considered to depend on, and in consequence also add outer-join relids to the relids of join RelOptInfos. This allows us to correctly perceive whether an expression can be calculated above or below a particular outer join.

  This change affects FDWs that want to plan foreign joins. They *must* follow suit when labeling foreign joins in order to match with the core planner, but for many purposes (if postgres_fdw is any guide) they'd prefer to consider only base relations within the join. To support both requirements, redefine ForeignScan.fs_relids as base+OJ relids, and add a new field fs_base_relids that's set up by the core planner.

  Large though it is, this commit just does the minimum necessary to install the new mechanisms and get check-world passing again. Follow-up patches will perform some cleanup. (The README additions and comments mention some stuff that will appear in the follow-up.)

  Patch by me; thanks to Richard Guo for review.

  Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
* Remove redundant grouping and DISTINCT columns. (Tom Lane, 2023-01-18)

  Avoid explicitly grouping by columns that we know are redundant for sorting; for example we need group by only one of x and y in

    SELECT ... WHERE x = y GROUP BY x, y

  This comes up more often than you might think, as shown by the changes in the regression tests. It's nearly free to detect too, since we are just piggybacking on the existing logic that detects redundant pathkeys. (In some of the existing plans that change, it's visible that a sort step preceding the grouping step already didn't bother to sort by the redundant column, making the old plan a bit silly-looking.)

  To do this, build processed_groupClause and processed_distinctClause lists that omit any provably-redundant sort items, and consult those not the originals where relevant. This means that within the planner, one should usually consult root->processed_groupClause or root->processed_distinctClause if one wants to know which columns are to be grouped on; but to check whether grouping or distinct-ing is happening at all, check non-NIL-ness of parse->groupClause or parse->distinctClause. This is comparable to longstanding rules about handling the HAVING clause, so I don't think it'll be a huge maintenance problem.

  nodeAgg.c also needs minor mods, because it's now possible to generate AGG_PLAIN and AGG_SORTED Agg nodes with zero grouping columns.

  Patch by me; thanks to Richard Guo and David Rowley for review.

  Discussion: https://postgr.es/m/185315.1672179489@sss.pgh.pa.us
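  To make the effect concrete, a sketch against a hypothetical table t(x, y): because the WHERE clause equates x and y, grouping by x alone is sufficient and the planner can plan it that way while still returning both columns:

    SELECT x, y, count(*)
    FROM t
    WHERE x = y
    GROUP BY x, y;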
* Update copyright for 2023 (Bruce Momjian, 2023-01-02)

  Backpatch-through: 11
* Move PartitionPruneInfo out of plan nodes into PlannedStmt (Alvaro Herrera, 2022-12-01)

  The planner will now add a given PartitionPruneInfo to PlannedStmt.partPruneInfos instead of directly to the Append/MergeAppend plan node. What gets set instead in the latter is an index field which points to the list element of PlannedStmt.partPruneInfos containing the PartitionPruneInfo belonging to the plan node.

  A later commit will make AcquireExecutorLocks() do the initial partition pruning to determine a minimal set of partitions to be locked when validating a plan tree, and it will need to consult the PartitionPruneInfos referenced therein to do so. It would be better for the PartitionPruneInfos to be accessible directly than requiring a walk of the plan tree to find them, which is easier when it can be done by simply iterating over PlannedStmt.partPruneInfos.

  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
* Stop accessing checkAsUser via RTE in some cases (Alvaro Herrera, 2022-11-30)

  A future commit will move the checkAsUser field from RangeTblEntry to a new node that, unlike RTEs, will only be created for tables mentioned in the query but not for the inheritance child relations added to the query by the planner. So, the checkAsUser value for a given child relation will have to be obtained by referring to that for its ancestor mentioned in the query.

  In preparation, it seems better to expand the use of RelOptInfo.userid during planning in place of rte->checkAsUser so that there will be fewer places to adjust for the above change.

  Given that the child-to-ancestor mapping is not available during the execution of a given "child" ForeignScan node, add a checkAsUser field to ForeignScan to carry the child relation's RelOptInfo.userid.

  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqGFCs2uq7VRKi7g+FFKbP6Ea_2_HkgZb2HPhUfaAKT3ng@mail.gmail.com
* Disallow MERGE cleanly for foreign partitions (Alvaro Herrera, 2022-10-15)

  While directly targeting a foreign table with MERGE was already expressly forbidden, we failed to catch the case of a partitioned table that has a foreign table as a partition; and the result if you try is an incomprehensible error. Fix that by adding a specific check.

  Backpatch to 15.

  Reported-by: Tatsuhiro Nakamori <bt22nakamorit@oss.nttdata.com>
  Discussion: https://postgr.es/m/bt22nakamorit@oss.nttdata.com
* Harmonize more parameter names in bulk. (Peter Geoghegan, 2022-09-20)

  Make sure that function declarations use names that exactly match the corresponding names from function definitions in optimizer, parser, utility, libpq, and "commands" code, as well as in remaining library code. Do the same for all code related to frontend programs (with the exception of pg_dump/pg_dumpall related code).

  Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Later commits will handle ecpg and pg_dump/pg_dumpall.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: David Rowley <dgrowleyml@gmail.com>
  Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
* Use an explicit state flag to control PlaceHolderInfo creation. (Tom Lane, 2022-08-17)

  Up to now, callers of find_placeholder_info() were required to pass a flag indicating if it's OK to make a new PlaceHolderInfo. That'd be fine if the callers had free choice, but they do not. Once we begin deconstruct_jointree() it's no longer OK to make more PHIs; while callers before that always want to create a PHI if it's not there already. So there's no freedom of action, only the opportunity to cause bugs by creating PHIs too late. Let's get rid of that in favor of adding a state flag PlannerInfo.placeholdersFrozen, which we can set at the point where it's no longer OK to make more PHIs.

  This patch also simplifies a couple of call sites that were using complicated logic to avoid calling find_placeholder_info() as much as possible. Now that that lookup is O(1) thanks to the previous commit, the extra bitmap manipulations are probably a net negative.

  Discussion: https://postgr.es/m/1405792.1660677844@sss.pgh.pa.us
* Avoid using list_length() to test for empty list. (Tom Lane, 2022-08-17)

  The standard way to check for list emptiness is to compare the List pointer to NIL; our list code goes out of its way to ensure that that is the only representation of an empty list. (An acceptable alternative is a plain boolean test for non-null pointer, but explicit mention of NIL is usually preferable.)

  Various places didn't get that memo and expressed the condition with list_length(), which might not be so bad except that there were such a variety of ways to check it exactly: equal to zero, less than or equal to zero, less than one, yadda yadda. In the name of code readability, let's standardize all those spellings as "list == NIL" or "list != NIL". (There's probably some microscopic efficiency gain too, though few of these look to be at all performance-critical.)

  A very small number of cases were left as-is because they seemed more consistent with other adjacent list_length tests that way.

  Peter Smith, with bikeshedding from a number of us

  Discussion: https://postgr.es/m/CAHut+PtQYe+ENX5KrONMfugf0q6NHg4hR5dAhqEXEc2eefFeig@mail.gmail.com
* Use list_copy_head() instead of list_truncate(list_copy(...), ...) (David Rowley, 2022-07-13)

  Truncating off the end of a freshly copied List is not a very efficient way of copying the first N elements of a List.

  In many of the cases that are updated here, the pattern was only being used to remove the final element of a List. That's about the best case for it, but there were many instances where the truncation trimmed the List down much further.

  4cc832f94 added list_copy_head(), so let's use it in cases where it's useful.

  Author: David Rowley
  Discussion: https://postgr.es/m/1986787.1657666922%40sss.pgh.pa.us
* Avoid overflow hazard when clamping group counts to "long int". (Tom Lane, 2022-05-21)

  Several places in the planner tried to clamp a double value to fit in a "long" by doing

    (long) Min(x, (double) LONG_MAX);

  This is subtly incorrect, because it casts LONG_MAX to double and potentially back again. If long is 64 bits then the double value is inexact, and the platform might round it up to LONG_MAX+1 resulting in an overflow and an undesirably negative output.

  While it's not hard to rewrite the expression into a safe form, let's put it into a common function to reduce the risk of someone doing it wrong in future.

  In principle this is a bug fix, but since the problem could only manifest with group count estimates exceeding 2^63, it seems unlikely that anyone has actually hit this or will do so anytime soon. We're fixing it mainly to satisfy fuzzer-type tools. That being the case, a HEAD-only fix seems sufficient.

  Andrey Lepikhov

  Discussion: https://postgr.es/m/ebbc2efb-7ef9-bf2f-1ada-d6ec48f70e58@postgrespro.ru
* Pre-beta mechanical code beautification. (Tom Lane, 2022-05-12)

  Run pgindent, pgperltidy, and reformat-dat-files. I manually fixed a couple of comments that pgindent uglified.
* Disable asynchronous execution if using gating Result nodes. (Etsuro Fujita, 2022-04-28)

  mark_async_capable_plan(), which is called from create_append_plan() to determine whether subplans are async-capable, failed to take into account that the given subplan created from a given subpath might include a gating Result node if the subpath is a SubqueryScanPath or ForeignPath, causing a segmentation fault there when the subplan created from a SubqueryScanPath includes the Result node, or causing ExecAsyncRequest() to throw an error about an unrecognized node type when the subplan created from a ForeignPath includes the Result node, because in the latter case the Result node was unintentionally considered as async-capable, but we don't currently support executing Result nodes asynchronously. Fix by modifying mark_async_capable_plan() to disable asynchronous execution in such cases.

  Also, adjust code in the ProjectionPath case in mark_async_capable_plan(), for consistency with other cases, and adjust/improve comments there.

  is_async_capable_path() added in commit 27e1f1456, which was rewritten to mark_async_capable_plan() in a later commit, has the same issue, causing the error at execution mentioned above, so back-patch to v14 where the aforesaid commit went in.

  Per report from Justin Pryzby.

  Etsuro Fujita, reviewed by Zhihong Yu and Justin Pryzby.

  Discussion: https://postgr.es/m/20220408124338.GK24419%40telsasoft.com
* Remove inadequate assertion check in CTE inlining. (Tom Lane, 2022-04-21)

  inline_cte() expected to find exactly as many references to the target CTE as its cterefcount indicates. While that should be accurate for the tree as emitted by the parser, there are some optimizations that occur upstream of here that could falsify it, notably removal of unused subquery output expressions.

  Trying to make the accounting 100% accurate seems expensive and doomed to future breakage. It's not really worth it, because all this code is protecting is downstream assumptions that every referenced CTE has a plan. Let's convert those assertions to regular test-and-elog just in case there's some actual problem, and then drop the failing assertion.

  Per report from Tomas Vondra (thanks also to Richard Guo for analysis). Back-patch to v12 where the faulty code came in.

  Discussion: https://postgr.es/m/29196a1e-ed47-c7ca-9be2-b1c636816183@enterprisedb.com
* Teach planner and executor about monotonic window funcs (David Rowley, 2022-04-08)

  Window functions such as row_number() always return a value higher than the previously returned value for tuples in any given window partition.

  Traditionally queries such as

    SELECT * FROM (
       SELECT *, row_number() OVER (ORDER BY c) rn FROM t
    ) t WHERE rn <= 10;

  were executed fairly inefficiently. Neither the query planner nor the executor knew that once rn made it to 11 that nothing further would match the outer query's WHERE clause. It would blindly continue until all tuples were exhausted from the subquery.

  Here we implement means to make the above execute more efficiently.

  This is done by way of adding a pg_proc.prosupport function to various of the built-in window functions and adding supporting code to allow the support function to inform the planner if the window function is monotonically increasing, monotonically decreasing, both or neither. The planner is then able to make use of that information and possibly allow the executor to short-circuit execution by way of adding a "run condition" to the WindowAgg to allow it to determine if some of its execution work can be skipped.

  This "run condition" is not like a normal filter. These run conditions are only built using quals comparing values to monotonic window functions. For monotonically increasing functions, quals making use of the btree operators for <, <= and = can be used (assuming the window function column is on the left). You can see here that once such a condition becomes false, a monotonically increasing function could never make it subsequently true again. For monotonically decreasing functions the >, >= and = btree operators for the given type can be used for run conditions.

  The best-case situation for this is when there is a single WindowAgg node without a PARTITION BY clause. Here when the run condition becomes false the WindowAgg node can simply return NULL. No more tuples will ever match the run condition. It's a little more complex when there is a PARTITION BY clause. In this case, we cannot return NULL as we must still process other partitions. To speed this case up we pull tuples from the outer plan to check if they're from the same partition and simply discard them if they are. When we find a tuple belonging to another partition we start processing as normal again until the run condition becomes false or we run out of tuples to process.

  When there are multiple WindowAgg nodes to evaluate then this complicates the situation. For intermediate WindowAggs we must ensure we always return all tuples to the calling node. Any filtering done could lead to incorrect results in WindowAgg nodes above. For all intermediate nodes, we can still save some work when the run condition becomes false. We've no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot reference the value of these and these tuples will not appear in the final result anyway. The savings here are small in comparison to what can be saved in the top-level WindowAgg, but still worthwhile.

  Intermediate WindowAgg nodes never filter out tuples, but here we change WindowAgg so that the top-level WindowAgg filters out tuples that don't match the intermediate WindowAgg node's run condition. Such filters appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.

  Here we add prosupport functions to allow the above to work for row_number(), rank(), dense_rank(), count(*) and count(expr). It appears technically possible to do the same for min() and max(), however, it seems unlikely to be useful enough, so that's not done here.

  Bump catversion

  Author: David Rowley
  Reviewed-by: Andy Fan, Zhihong Yu
  Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
* Allow asynchronous execution in more cases. (Etsuro Fujita, 2022-04-06)

  In commit 27e1f1456, create_append_plan() only allowed the subplan created from a given subpath to be executed asynchronously when it was an async-capable ForeignPath. To extend coverage, this patch handles cases when the given subpath includes some other Path types as well that can be omitted in the plan processing, such as a ProjectionPath directly atop an async-capable ForeignPath, allowing asynchronous execution in partitioned-scan/partitioned-join queries with non-Var tlist expressions and more UNION queries.

  Andrey Lepikhov and Etsuro Fujita, reviewed by Alexander Pyhalov and Zhihong Yu.

  Discussion: https://postgr.es/m/659c37a8-3e71-0ff2-394c-f04428c76f08%40postgrespro.ru
* Add support for MERGE SQL command (Alvaro Herrera, 2022-03-28)

  MERGE performs actions that modify rows in the target table using a source table or query. MERGE provides a single SQL statement that can conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise require multiple PL statements. For example,

    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;

  MERGE works with regular tables, partitioned tables and inheritance hierarchies, including column and row security enforcement, as well as support for row and statement triggers and transition tables therein.

  MERGE is optimized for OLTP and is parameterizable, though also useful for large scale ETL/ELT. MERGE is not intended to be used in preference to existing single SQL commands for INSERT, UPDATE or DELETE since there is some overhead. MERGE can be used from PL/pgSQL.

  MERGE does not support targeting updatable views or foreign tables, and RETURNING clauses are not allowed either. These limitations are likely fixable with sufficient effort. Rewrite rules are also not supported, but it's not clear that we'd want to support them.

  Author: Pavan Deolasee <pavan.deolasee@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Simon Riggs <simon.riggs@enterprisedb.com>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
  Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
  Reviewed-by: Japin Li <japinli@hotmail.com>
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
  Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
  Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
* Don't use_physical_tlist for an IOS with non-returnable columns. (Tom Lane, 2022-02-11)

  createplan.c tries to save a runtime projection step by specifying a scan plan node's output as being exactly the table's columns, or index's columns in the case of an index-only scan, if there is not a reason to do otherwise. This logic did not previously pay attention to whether an index's columns are returnable. That worked, sort of accidentally, until commit 9a3ddeb51 taught setrefs.c to reject plans that try to read a non-returnable column. I have no desire to loosen setrefs.c's new check, so instead adjust use_physical_tlist() to not try to optimize this way when there are non-returnable column(s).

  Per report from Ryan Kelly. Like the previous patch, back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAHUie24ddN+pDNw7fkhNrjrwAX=fXXfGZZEHhRuofV_N_ftaSg@mail.gmail.com