path: root/src/backend/optimizer/plan/setrefs.c
* In ExecInitModifyTable, don't scribble on the source plan. (Tom Lane, 2025-05-22)
  The code carelessly modified mtstate->ps.plan->targetlist, which it's not supposed to do. Fortunately, there's not really any need to do that because the planner already set up a perfectly acceptable targetlist for the plan node. We just need to remove the erroneous assignments and update some relevant comments.
  As it happens, the erroneous assignments caused the targetlist to point to a different part of the source plan tree, so that there isn't really a risk of the pointer becoming dangling after executor termination. The only visible effect of this change we can find is that EXPLAIN will show upper references to the ModifyTable's output expressions using different variables. Formerly it showed Vars from the first target relation that survived executor-startup pruning. Now it always shows such references using the first relation appearing in the planner output, independently of what happens during executor pruning. On the whole that seems like a good thing.
  Also make a small tweak in ExplainPreScanNode to ensure that the first relation will receive a refname assignment in set_rtable_names, even if it got pruned at startup. Previously the Vars might be shown without any table qualification, which is confusing in a multi-table query.
  I considered back-patching this, but since the bug doesn't seem to have any really terrible consequences in existing branches, it seems better to not change their EXPLAIN output. It's not too late for v18 though, especially since v18 already made other changes in the EXPLAIN output for these cases.
  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Author: Andres Freund <andres@anarazel.de>
  Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/213261.1747611093@sss.pgh.pa.us
* Revert "Don't lock partitions pruned by initial pruning"Amit Langote2025-05-22
| | | | | | | | | | | | | | | | As pointed out by Tom Lane, the patch introduced fragile and invasive design around plan invalidation handling when locking of prunable partitions was deferred from plancache.c to the executor. In particular, it violated assumptions about CachedPlan immutability and altered executor APIs in ways that are difficult to justify given the added complexity and overhead. This also removes the firstResultRels field added to PlannedStmt in commit 28317de72, which was intended to support deferred locking of certain ModifyTable result relations. Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://postgr.es/m/605328.1747710381@sss.pgh.pa.us
* Ensure first ModifyTable rel initialized if all are pruned (Amit Langote, 2025-03-19)
  Commit cbc127917e introduced tracking of unpruned relids to avoid processing pruned relations, and changed ExecInitModifyTable() to initialize only unpruned result relations. As a result, MERGE statements that prune all target partitions can now lead to crashes or incorrect behavior during execution.
  The crash occurs because some executor code paths rely on ModifyTableState.resultRelInfo[0] being present and initialized, even when no result relations remain after pruning. For example, ExecMerge() and ExecMergeNotMatched() use the first resultRelInfo to determine the appropriate action. Similarly, ExecInitPartitionInfo() assumes that at least one result relation exists.
  To preserve these assumptions, ExecInitModifyTable() now includes the first result relation in the initialized result relation list if all result relations for that ModifyTable were pruned. To enable that, ExecDoInitialPruning() ensures the first relation is locked if it was pruned and locking is necessary.
  To support this exception to the pruning logic, PlannedStmt now includes a list of RT indexes identifying the first result relation of each ModifyTable node in the plan. This allows ExecDoInitialPruning() to check whether each such relation was pruned and, if so, lock it if necessary.
  Bug: #18830
  Reported-by: Robins Tharakan <tharakan@gmail.com>
  Diagnosed-by: Tender Wang <tndrwang@gmail.com>
  Diagnosed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
  Co-authored-by: Dean Rasheed <dean.a.rasheed@gmail.com>
  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
  Discussion: https://postgr.es/m/18830-1f31ea1dc930d444%40postgresql.org
* Track unpruned relids to avoid processing pruned relations (Amit Langote, 2025-02-07)
  This commit introduces changes to track unpruned relations explicitly, making it possible for top-level plan nodes, such as ModifyTable and LockRows, to avoid processing partitions pruned during initial pruning. Scan-level nodes, such as Append and MergeAppend, already avoid the unnecessary processing by accessing partition pruning results directly via part_prune_index. In contrast, top-level nodes cannot access pruning results directly and need to determine which partitions remain unpruned.
  To address this, this commit introduces a new bitmapset field, es_unpruned_relids, which the executor uses to track the set of unpruned relations. This field is referenced during plan initialization to skip initializing certain nodes for pruned partitions. It is initialized with PlannedStmt.unprunableRelids, a new field that the planner populates with RT indexes of relations that cannot be pruned during runtime pruning. These include relations not subject to partition pruning and those required for execution regardless of pruning.
  PlannedStmt.unprunableRelids is computed during set_plan_refs() by removing the RT indexes of runtime-prunable relations, identified from PartitionPruneInfos, from the full set of relation RT indexes. ExecDoInitialPruning() then updates es_unpruned_relids by adding partitions that survive initial pruning.
  To support this, PartitionedRelPruneInfo and PartitionedRelPruningData now include a leafpart_rti_map[] array that maps partition indexes to their corresponding RT indexes. The former is used in set_plan_refs() when constructing unprunableRelids, while the latter is used in ExecDoInitialPruning() to convert partition indexes returned by get_matching_partitions() into RT indexes, which are then added to es_unpruned_relids.
  These changes make it possible for ModifyTable and LockRows nodes to process only relations that remain unpruned after initial pruning. ExecInitModifyTable() trims lists, such as resultRelations, withCheckOptionLists, returningLists, and updateColnosLists, to consider only unpruned partitions. It also creates ResultRelInfo structs only for these partitions. Similarly, child RowMarks for pruned relations are skipped. By avoiding unnecessary initialization of structures for pruned partitions, these changes improve the performance of updates and deletes on partitioned tables during initial runtime pruning.
  Due to ExecInitModifyTable() changes as described above, EXPLAIN on a plan for UPDATE and DELETE that uses runtime initial pruning no longer lists partitions pruned during initial pruning.
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
  Reviewed-by: Tomas Vondra <tomas@vondra.me>
  Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
* Move PartitionPruneInfo out of plan nodes into PlannedStmt (Amit Langote, 2025-01-30)
  This moves PartitionPruneInfo from plan nodes to PlannedStmt, simplifying traversal by centralizing all PartitionPruneInfo structures in a single list in it, which holds all instances for the main query and its subqueries. Instead of plan nodes (Append or MergeAppend) storing PartitionPruneInfo pointers, they now reference an index in this list.
  A bitmapset field is added to PartitionPruneInfo to store the RT indexes corresponding to the apprelids field in Append or MergeAppend. This allows execution pruning logic to verify that it operates on the correct plan node, mainly to facilitate debugging.
  Duplicated code in set_append_references() and set_mergeappend_references() is refactored into a new function, register_pruneinfo(). This updates RT indexes by applying rtoffset and adds PartitionPruneInfo to the global list in PlannerGlobal.
  By allowing pruning to be performed without traversing the plan tree, this change lays the groundwork for runtime initial pruning to occur independently of plan tree initialization.
  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org> (earlier version)
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Reviewed-by: Tomas Vondra <tomas@vondra.me>
  Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
* Fix setrefs.c's failure to do expression processing on prune steps. (Tom Lane, 2025-01-16)
  We should run the expression subtrees of PartitionedRelPruneInfo structs through fix_scan_expr. Failure to do so means that AlternativeSubPlans within those expressions won't be cleaned up properly, resulting in "unrecognized node type" errors since v14.
  It seems fairly likely that at least some of the other steps done by fix_scan_expr are important here as well, resulting in as-yet-undetected bugs. Therefore, I've chosen to back-patch this to all supported branches including v13, even though the known symptom doesn't manifest in v13.
  Per bug #18778 from Alexander Lakhin.
  Discussion: https://postgr.es/m/18778-24cd399df6c806af@postgresql.org
* Add OLD/NEW support to RETURNING in DML queries. (Dean Rasheed, 2025-01-16)
  This allows the RETURNING list of INSERT/UPDATE/DELETE/MERGE queries to explicitly return old and new values by using the special aliases "old" and "new", which are automatically added to the query (if not already defined) while parsing its RETURNING list, allowing things like:
    RETURNING old.colname, new.colname, ...
    RETURNING old.*, new.*
  Additionally, a new syntax is supported, allowing the names "old" and "new" to be changed to user-supplied alias names, e.g.:
    RETURNING WITH (OLD AS o, NEW AS n) o.colname, n.colname, ...
  This is useful when the names "old" and "new" are already defined, such as inside trigger functions, allowing backwards compatibility to be maintained -- the interpretation of any existing queries that happen to already refer to relations called "old" or "new", or use those as aliases for other relations, is not changed.
  For an INSERT, old values will generally be NULL, and for a DELETE, new values will generally be NULL, but that may change for an INSERT with an ON CONFLICT ... DO UPDATE clause, or if a query rewrite rule changes the command type. Therefore, we put no restrictions on the use of old and new in any DML queries.
  Dean Rasheed, reviewed by Jian He and Jeff Davis.
  Discussion: https://postgr.es/m/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com
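  A minimal sketch of the syntax described in the commit above; the accounts table and its columns are invented for illustration, not taken from the commit:
    -- hypothetical table, for illustration only
    CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
    INSERT INTO accounts VALUES (1, 100);

    -- return the value of balance before and after the update
    UPDATE accounts SET balance = balance + 10
      RETURNING old.balance AS old_balance, new.balance AS new_balance;

    -- same, but renaming the aliases in case "old"/"new" are already in use
    UPDATE accounts SET balance = balance + 10
      RETURNING WITH (OLD AS o, NEW AS n) o.balance AS old_balance, n.balance AS new_balance;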
* Update copyright for 2025 (Bruce Momjian, 2025-01-01)
  Backpatch-through: 13
* Remove useless casts to (void *) (Peter Eisentraut, 2024-11-28)
  Many of them just seem to have been copied around for no real reason. Their presence causes (small) risks of hiding actual type mismatches or silently discarding qualifiers.
  Discussion: https://www.postgresql.org/message-id/flat/461ea37c-8b58-43b4-9736-52884e862820@eisentraut.org
* Mark expressions nullable by grouping sets (Richard Guo, 2024-09-10)
  When generating window_pathkeys, distinct_pathkeys, or sort_pathkeys, we failed to realize that the grouping/ordering expressions might be nullable by grouping sets. As a result, we may incorrectly deem that the PathKeys are redundant by EquivalenceClass processing and thus remove them from the pathkeys list. That would lead to wrong results in some cases.
  To fix this issue, we mark the grouping expressions nullable by grouping sets if that is the case. If the grouping expression is a Var or PlaceHolderVar or constructed from those, we can just add the RT index of the RTE_GROUP RTE to the existing nullingrels field(s); otherwise we have to add a PlaceHolderVar to carry on the nullingrel bit.
  However, we have to manually remove this nullingrel bit from expressions in various cases where these expressions are logically below the grouping step, such as when we generate groupClause pathkeys for grouping sets, or when we generate PathTarget for initial input to grouping nodes.
  Furthermore, in set_upper_references, the targetlist and quals of an Agg node should have nullingrels that include the effects of the grouping step, ie they will have nullingrels equal to the input Vars/PHVs' nullingrels plus the nullingrel bit that references the grouping RTE. In order to perform exact nullingrels matches, we also need to manually remove this nullingrel bit.
  Bump catversion because this changes the querytree produced by the parser.
  Thanks to Tom Lane for the idea to invent a new kind of RTE.
  Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from various threads.
  Author: Richard Guo
  Reviewed-by: Ashutosh Bapat, Sutou Kouhei
  Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com
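  A hypothetical example of the kind of query the commit above is about (table t and columns a, b are invented for this sketch): the ORDER BY expressions are grouping expressions that can be set to NULL by the grouping sets, so the corresponding sort pathkeys must not be discarded as redundant:
    -- hypothetical table and columns, for illustration only
    SELECT a, b, count(*)
    FROM t
    GROUP BY GROUPING SETS ((a), (b))
    ORDER BY a, b;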
* Introduce an RTE for the grouping step (Richard Guo, 2024-09-10)
  If there are subqueries in the grouping expressions, each of these subqueries in the targetlist and HAVING clause is expanded into distinct SubPlan nodes. As a result, only one of these SubPlan nodes would be converted to reference to the grouping key column output by the Agg node; others would have to get evaluated afresh. This is not efficient, and with grouping sets this can cause wrong results issues in cases where they should go to NULL because they are from the wrong grouping set. Furthermore, during re-evaluation, these SubPlan nodes might use nulled column values from grouping sets, which is not correct.
  This issue is not limited to subqueries. For other types of expressions that are part of grouping items, if they are transformed into another form during preprocessing, they may fail to match lower target items. This can also lead to wrong results with grouping sets.
  To fix this issue, we introduce a new kind of RTE representing the output of the grouping step, with columns that are the Vars or expressions being grouped on. In the parser, we replace the grouping expressions in the targetlist and HAVING clause with Vars referencing this new RTE, so that the output of the parser directly expresses the semantic requirement that the grouping expressions be gotten from the grouping output rather than computed some other way. In the planner, we first preprocess all the columns of this new RTE and then replace any Vars in the targetlist and HAVING clause that reference this new RTE with the underlying grouping expressions, so that we will have only one instance of a SubPlan node for each subquery contained in the grouping expressions.
  Bump catversion because this changes the querytree produced by the parser.
  Thanks to Tom Lane for the idea to invent a new kind of RTE.
  Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from various threads.
  Author: Richard Guo
  Reviewed-by: Ashutosh Bapat, Sutou Kouhei
  Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com
* Fix comment about cross-checking the varnullingrels (Richard Guo, 2024-06-10)
  The nullingrels match checks are not limited to debugging builds. Oversight in commit 867be9c07.
  Author: Richard Guo
  Reviewed-by: Alvaro Herrera, Tom Lane, Robert Haas
  Discussion: https://postgr.es/m/CAMbWs4_SDsdYD7DdQw7RXc3jv3axbg+RGZ7aSi9GaqX=F8hNVw@mail.gmail.com
* Add support for MERGE ... WHEN NOT MATCHED BY SOURCE. (Dean Rasheed, 2024-03-30)
  This allows MERGE commands to include WHEN NOT MATCHED BY SOURCE actions, which operate on rows that exist in the target relation, but not in the data source. These actions can execute UPDATE, DELETE, or DO NOTHING sub-commands.
  This is in contrast to already-supported WHEN NOT MATCHED actions, which operate on rows that exist in the data source, but not in the target relation. To make this distinction clearer, such actions may now be written as WHEN NOT MATCHED BY TARGET.
  Writing WHEN NOT MATCHED without specifying BY SOURCE or BY TARGET is equivalent to writing WHEN NOT MATCHED BY TARGET.
  Dean Rasheed, reviewed by Alvaro Herrera, Ted Yu and Vik Fearing.
  Discussion: https://postgr.es/m/CAEZATCWqnKGc57Y_JanUBHQXNKcXd7r=0R4NEZUVwP+syRkWbA@mail.gmail.com
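  A minimal, hypothetical sketch of the new clause (the target/source tables and their columns are invented, not from the commit): rows found only in the source are inserted, matched rows are updated, and rows found only in the target are deleted:
    -- hypothetical tables, for illustration only
    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN
      UPDATE SET balance = s.balance
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (id, balance) VALUES (s.id, s.balance)
    WHEN NOT MATCHED BY SOURCE THEN
      DELETE;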
* Remove unused #include's from backend .c files (Peter Eisentraut, 2024-03-04)
  as determined by include-what-you-use (IWYU)
  While IWYU also suggests to *add* a bunch of #include's (which is its main purpose), this patch does not do that. In some cases, a more specific #include replaces another less specific one.
  Some manual adjustments of the automatic result:
  - IWYU currently doesn't know about includes that provide global variable declarations (like -Wmissing-variable-declarations), so those includes are being kept manually.
  - All includes for port(ability) headers are being kept for now, to play it safe.
  - No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the patch from exploding in size.
  Note that this patch touches just *.c files, so nothing declared in header files changes in hidden ways.
  As a small example, in src/backend/access/transam/rmgr.c, some IWYU pragma annotations are added to handle a special case there.
  Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
* Update copyright for 2024 (Bruce Momjian, 2024-01-03)
  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12
* Fix comment about ressortgrouprefs being unique in setop plans. (Heikki Linnakangas, 2023-11-28)
  Author: Richard Guo, Tom Lane
  Discussion: https://www.postgresql.org/message-id/CAMbWs49rAfFS-yd7=QxtDUrZDFfRBGy4rGBJNyGDH7=CLipFPg@mail.gmail.com
* Add trailing commas to enum definitions (Peter Eisentraut, 2023-10-26)
  Since C99, there can be a trailing comma after the last value in an enum definition. A lot of new code has been introducing this style on the fly. Some new patches are now taking an inconsistent approach to this. Some add the last comma on the fly if they add a new last value, some are trying to preserve the existing style in each place, some are even dropping the last comma if there was one. We could nudge this all in a consistent direction if we just add the trailing commas everywhere once.
  I omitted a few places where there was a fixed "last" value that will always stay last. I also skipped the header files of libpq and ecpg, in case people want to use those with older compilers. There were also a small number of cases where the enum type wasn't used anywhere (but the enum values were), which ended up confusing pgindent a bit, so I left those alone.
  Discussion: https://www.postgresql.org/message-id/flat/386f8c45-c8ac-4681-8add-e3b0852c1620%40eisentraut.org
* Collect dependency information for parsed CallStmts. (Tom Lane, 2023-09-25)
  Parse analysis of a CallStmt will inject mutable information, for instance the OID of the called procedure, so that subsequent DDL may create a need to re-parse the CALL. We failed to detect this for CALLs in plpgsql routines, because no dependency information was collected when putting a CallStmt into the plan cache. That could lead to misbehavior or strange errors such as "cache lookup failed".
  Before commit ee895a655, the issue would only manifest for CALLs appearing in atomic contexts, because we re-planned non-atomic CALLs every time through anyway.
  It is now apparent that extract_query_dependencies() probably needs a special case for every utility statement type for which stmt_requires_parse_analysis() returns true. I wanted to add something like Assert(!stmt_requires_parse_analysis(...)) when falling out of extract_query_dependencies_walker without doing anything, but there are API issues as well as a more fundamental point: stmt_requires_parse_analysis is supposed to be applied to raw parser output, so it'd be cheating to assume it will give the correct answer for post-parse-analysis trees. I contented myself with adding a comment.
  Per bug #18131 from Christian Stork. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/18131-576854e79c5cd264@postgresql.org
* Fix incorrect logic in plan dependency recording (David Rowley, 2023-09-14)
  Both 50e17ad28 and 29f45e299 tried to record a plan dependency on a function but mistakenly inverted the OidIsValid test. This meant that we'd record a dependency only when the function's Oid was InvalidOid. Clearly this was meant to *not* record the dependency in that case.
  50e17ad28 made this mistake first, then in v15 29f45e299 copied the same mistake.
  Reported-by: Tom Lane
  Backpatch-through: 14, where 50e17ad28 first made this mistake
  Discussion: https://postgr.es/m/2277537.1694301772@sss.pgh.pa.us
* Allow plan nodes with initPlans to be considered parallel-safe. (Tom Lane, 2023-07-14)
  If the plan itself is parallel-safe, and the initPlans are too, there's no reason anymore to prevent the plan from being marked parallel-safe. That restriction (dating to commit ab77a5a45) was really a special case of the fact that we couldn't transmit subplans to parallel workers at all. We fixed that in commit 5e6d8d2bb and follow-ons, but this case never got addressed.
  We still forbid attaching initPlans to a Gather node that's inserted pursuant to debug_parallel_query = regress. That's because, when we hide the Gather from EXPLAIN output, we'd hide the initPlans too, causing cosmetic regression diffs. It seems inadvisable to kluge EXPLAIN to the extent required to make the output look the same, so just don't do it in that case.
  Along the way, this also takes care of some sloppiness about updating path costs to match when we move initplans from one place to another during createplan.c and setrefs.c. Since all the planning decisions are already made by that point, this is just cosmetic; but it seems good to keep EXPLAIN output consistent with where the initplans are.
  The diff in query_planner() might be worth remarking on. I found that one because after fixing things to allow parallel-safe initplans, one partition_prune test case changed plans (as shown in the patch) --- but only when debug_parallel_query was active. The reason proved to be that we only bothered to mark Result nodes as potentially parallel-safe when debug_parallel_query is on. This neglects the fact that parallel-safety may be of interest for a sub-query even though the Result itself doesn't parallelize.
  Discussion: https://postgr.es/m/1129530.1681317832@sss.pgh.pa.us
* Account for optimized MinMax aggregates during SS_finalize_plan. (Tom Lane, 2023-07-14)
  We are capable of optimizing MIN() and MAX() aggregates on indexed columns into subqueries that exploit the index, rather than the normal thing of scanning the whole table. When we do this, we replace the Aggref node(s) with Params referencing subquery outputs. Such Params really ought to be included in the per-plan-node extParam/allParam sets computed by SS_finalize_plan. However, we've never done so up to now because of an ancient implementation choice to perform that substitution during set_plan_references, which runs after SS_finalize_plan, so that SS_finalize_plan never sees these Params.
  This seems like clearly a bug, yet there have been no field reports of problems that could trace to it. This may be because the types of Plan nodes that could contain Aggrefs do not have any of the rescan optimizations that are controlled by extParam/allParam. Nonetheless it seems certain to bite us someday, so let's fix it in a self-contained patch that can be back-patched if we find a case in which there's a live bug pre-v17.
  The cleanest fix would be to perform a separate tree walk to do these substitutions before SS_finalize_plan runs. That seems unattractive, first because a whole-tree mutation pass is expensive, and second because we lack infrastructure for visiting expression subtrees in a Plan tree, so that we'd need a new function knowing as much as SS_finalize_plan knows about that. I also considered swapping the order of SS_finalize_plan and set_plan_references, but that fell foul of various assumptions that seem tricky to fix. So the approach adopted here is to teach SS_finalize_plan itself to check for such Aggrefs. I refactored things a bit in setrefs.c to avoid having three copies of the code that does that.
  Given the lack of any currently-known bug, no test case here.
  Discussion: https://postgr.es/m/2391880.1689025003@sss.pgh.pa.us
* Centralize fixups for mismatched nullingrels in nestloop params. (Tom Lane, 2023-06-20)
  It turns out that the fixes we applied in commits bfd332b3f and 63e4f13d2 were not nearly enough to solve the problem. We'd focused narrowly on subquery RTEs with lateral references, but lateral references can occur in several other RTE kinds such as function RTEs. Putting the same hack into half a dozen code paths seems quite unattractive. Hence, revert the code changes (but not the test cases) from those commits and instead solve it centrally in identify_current_nestloop_params(), as Richard proposed originally.
  This is a bit annoying because it could mask erroneous nullingrels in nestloop params that are generated from non-LATERAL parameterized paths; but on balance I don't see a better way. Maybe at some future time we'll be motivated to find a more rigorous approach to nestloop params, but that's not happening for beta2.
  Richard Guo and Tom Lane
  Discussion: https://postgr.es/m/CAMbWs48Jcw-NvnxT23WiHP324wG44DvzcH1j4hc0Zn+3sR9cfg@mail.gmail.com
* Fix "wrong varnullingrels" for Memoize's lateral references, too.Tom Lane2023-06-13
| | | | | | | | | | | | | The issue fixed in commit bfd332b3f can also bite Memoize plans, because of the separate copies of lateral reference Vars made by paraminfo_get_equal_hashops. Apply the same hacky fix there. (In passing, clean up shaky grammar in the existing comments for this function.) Richard Guo Discussion: https://postgr.es/m/CAMbWs4-krwk0Wbd6WdufMAupuou_Ua73ijQ4XQCr1Mb5BaVtKQ@mail.gmail.com
* Fix "wrong varnullingrels" for subquery nestloop parameters.Tom Lane2023-06-12
| | | | | | | | | | | | | | | If we apply outer join identity 3 when relation C is a subquery having lateral references to relation B, then the lateral references within C continue to bear the original syntactically-correct varnullingrels marks, but that won't match what is available from the outer side of the nestloop. Compensate for that in process_subquery_nestloop_params(). This is a slightly hacky fix, but we certainly don't want to re-plan C in toto for each possible outer join order, so there's not a lot of better alternatives. Richard Guo and Tom Lane, per report from Markus Winand Discussion: https://postgr.es/m/DFBB2D25-DE97-49CA-A60E-07C881EA59A7@winand.at
* Track tlist_vinfo.varnullingrels even in non-Assert builds. (Tom Lane, 2023-05-17)
  Oversight in commit 867be9c07 (which should get reverted along with that, if we ever do revert it). Per buildfarm.
* Convert nullingrels match checks from Asserts to test-and-elog. (Tom Lane, 2023-05-17)
  It seems like the code that these checks are backstopping may have a few bugs left in it. Use a test-and-elog so that the tests are performed even in non-assert builds, and so that we get something more informative than "server closed the connection" on failure.
  Committed separately with the idea that eventually we'll revert this. It might be awhile though.
  Discussion: https://postgr.es/m/3014965.1684293045@sss.pgh.pa.us
* Revert "Move PartitionPruneInfo out of plan nodes into PlannedStmt"Alvaro Herrera2023-05-04
| | | | | | | | | | | | | This reverts commit ec386948948c and its fixup 589bb816499e. This change was intended to support query planning avoiding acquisition of locks on partitions that were going to be pruned; however, the overall project took a different direction at [1] and this bit is no longer needed. Put things back the way they were as agreed in [2], to avoid unnecessary complexity. Discussion: [1] https://postgr.es/m/4191508.1674157166@sss.pgh.pa.us Discussion: [2] https://postgr.es/m/20230502175409.kcoirxczpdha26wt@alvherre.pgsql
* Fix parallel-safety marking when moving initplans to another node. (Tom Lane, 2023-04-12)
  Our policy since commit ab77a5a45 has been that a plan node having any initplans is automatically not parallel-safe. (This could be relaxed, but not today.) clean_up_removed_plan_level neglected this, and could attach initplans to a parallel-safe child plan node without clearing the plan's parallel-safe flag. That could lead to "subplan was not initialized" errors at runtime, in case an initplan referenced another one and only the referencing one got transmitted to parallel workers.
  The fix in clean_up_removed_plan_level is trivial enough. materialize_finished_plan also moves initplans from one node to another, but it's okay because it already copies the source node's parallel_safe flag. The other place that does this kind of thing is standard_planner's hack to inject a top-level Gather when debug_parallel_query is active. But that's actually dead code given that we're correctly enforcing the "initplans aren't parallel safe" rule, so just replace it with an Assert that there are no initplans. Also improve some related comments.
  Normally we'd add a regression test case for this sort of bug. The mistake itself is already reached by existing tests, but there is accidentally no visible problem. The only known test case that creates an actual failure seems too indirect and fragile to justify keeping it as a regression test (not least because it fails to fail in v11, though the bug is clearly present there too).
  Per report from Justin Pryzby. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/ZDVt6MaNWkRDO1LQ@telsasoft.com
* Fix setrefs.c code for adjusting partPruneInfos (Alvaro Herrera, 2023-03-30)
  We were transferring partPruneInfos from PlannerInfo into PlannerGlobal wrong, essentially relying on all of them being transferred, and adjusting their list indexes based on that. But apparently it's possible that some of them are skipped, so that strategy leads to a corrupted execution tree. Instead, adjust each Append/MergeAppend's partpruneinfo index as we copy from one list to the other, which seems safer anyway.
  This requires adjusting the RT offset of the RTE referenced in each partPruneInfo ahead of actually adjusting the RTE itself, which seems a bit too ad-hoc.
  This problem was introduced by commit ec386948948c. However, it may be that we no longer require the change introduced there, so perhaps we should revert both the present commit and that one.
  Problem noticed by sqlsmith.
  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqG6tbc2oadsbyyy24b2AL295XHQgyLRWghmA7u_SL1K8A@mail.gmail.com
* Fix buggy recursion in flatten_rtes_walker(). (Tom Lane, 2023-02-13)
  Must save-and-restore the context we are modifying. Oversight in commit a61b1f748.
  Tender Wang
  Discussion: https://postgr.es/m/CAHewXNnnNySD_YcKNuFpQDV2gxWA7_YLWqHmYVcyoOYxn8kY2A@mail.gmail.com
  Discussion: https://postgr.es/m/20230212233711.GA1316@telsasoft.com
* Remove over-optimistic Assert. (Tom Lane, 2023-01-31)
  In commit 2489d76c4, I'd thought it'd be safe to assert that a PlaceHolderVar appearing in a scan-level expression has empty nullingrels. However this is not so, as when we determine that a join relation is certainly empty we'll put its targetlist into a Result-with-constant-false-qual node, and nothing is done to adjust the nullingrels of the Vars or PHVs therein. (Arguably, a Result used in this way isn't really a scan-level node, but it certainly isn't an upper node either ...) It's not clear this is worth any close analysis, so let's just take out the faulty Assert.
  Per report from Robins Tharakan. I added a test case based on his example, just in case somebody tries to tighten this up.
  Discussion: https://postgr.es/m/CAEP4nAz7Enq3+DEthGG7j27DpuwSRZnW0Nh6jtNh75yErQ_nbA@mail.gmail.com
* Make Vars be outer-join-aware. (Tom Lane, 2023-01-30)
  Traditionally we used the same Var struct to represent the value of a table column everywhere in parse and plan trees. This choice predates our support for SQL outer joins, and it's really a pretty bad idea with outer joins, because the Var's value can depend on where it is in the tree: it might go to NULL above an outer join. So expression nodes that are equal() per equalfuncs.c might not represent the same value, which is a huge correctness hazard for the planner.
  To improve this, decorate Var nodes with a bitmapset showing which outer joins (identified by RTE indexes) may have nulled them at the point in the parse tree where the Var appears. This allows us to trust that equal() Vars represent the same value. A certain amount of klugery is still needed to cope with cases where we re-order two outer joins, but it's possible to make it work without sacrificing that core principle. PlaceHolderVars receive similar decoration for the same reason.
  In the planner, we include these outer join bitmapsets into the relids that an expression is considered to depend on, and in consequence also add outer-join relids to the relids of join RelOptInfos. This allows us to correctly perceive whether an expression can be calculated above or below a particular outer join.
  This change affects FDWs that want to plan foreign joins. They *must* follow suit when labeling foreign joins in order to match with the core planner, but for many purposes (if postgres_fdw is any guide) they'd prefer to consider only base relations within the join. To support both requirements, redefine ForeignScan.fs_relids as base+OJ relids, and add a new field fs_base_relids that's set up by the core planner.
  Large though it is, this commit just does the minimum necessary to install the new mechanisms and get check-world passing again. Follow-up patches will perform some cleanup. (The README additions and comments mention some stuff that will appear in the follow-up.)
  Patch by me; thanks to Richard Guo for review.
  Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
* Get rid of the "new" and "old" entries in a view's rangetable.Tom Lane2023-01-18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The rule system needs "old" and/or "new" pseudo-RTEs in rule actions that are ON INSERT/UPDATE/DELETE. Historically it's put such entries into the ON SELECT rules of views as well, but those are really quite vestigial. The only thing we've used them for is to carry the view's relid forward to AcquireExecutorLocks (so that we can re-lock the view to verify it hasn't changed before re-using a plan) and to carry its relid and permissions data forward to execution-time permissions checks. What we can do instead of that is to retain these fields of the RTE_RELATION RTE for the view even after we convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of extra complication in the planner and AcquireExecutorLocks, but on the other hand we can get rid of the logic that moves that data from one place to another. The principal immediate benefit of doing this, aside from a small saving in the pg_rewrite data for views, is that these pseudo-RTEs no longer trigger ruleutils.c's heuristic about qualifying variable names when the rangetable's length is more than 1. That results in quite a number of small simplifications in regression test outputs, which are all to the good IMO. Bump catversion because we need to dump a few more fields of RTE_SUBQUERY RTEs. While those will always be zeroes anyway in stored rules (because we'd never populate them until query rewrite) they are useful for debugging, and it seems like we'd better make sure to transmit such RTEs accurately in plans sent to parallel workers. I don't think the executor actually examines these fields after startup, but someday it might. This is a second attempt at committing 1b4d280ea. The difference from the first time is that now we can add some filtering rules to AdjustUpgrade.pm to allow cross-version upgrade testing to pass despite all the cosmetic changes in CREATE VIEW outputs. Amit Langote (filtering rules by me) Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
* Revert "Get rid of the "new" and "old" entries in a view's rangetable."Tom Lane2023-01-11
| | | | | | | | | | | This reverts commit 1b4d280ea1eb7ddb2e16654d5fa16960bb959566. It's broken the buildfarm members that run cross-version-upgrade tests, because they're not prepared to deal with cosmetic differences between CREATE VIEW commands emitted by older servers and HEAD. Even if we had a solution to that, which we don't, it'd take some time to roll it out to the affected animals. This improvement isn't valuable enough to justify addressing that problem on an emergency basis, so revert it for now.
* Get rid of the "new" and "old" entries in a view's rangetable.Tom Lane2023-01-11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The rule system needs "old" and/or "new" pseudo-RTEs in rule actions that are ON INSERT/UPDATE/DELETE. Historically it's put such entries into the ON SELECT rules of views as well, but those are really quite vestigial. The only thing we've used them for is to carry the view's relid forward to AcquireExecutorLocks (so that we can re-lock the view to verify it hasn't changed before re-using a plan) and to carry its relid and permissions data forward to execution-time permissions checks. What we can do instead of that is to retain these fields of the RTE_RELATION RTE for the view even after we convert it to an RTE_SUBQUERY RTE. This requires a tiny amount of extra complication in the planner and AcquireExecutorLocks, but on the other hand we can get rid of the logic that moves that data from one place to another. The principal immediate benefit of doing this, aside from a small saving in the pg_rewrite data for views, is that these pseudo-RTEs no longer trigger ruleutils.c's heuristic about qualifying variable names when the rangetable's length is more than 1. That results in quite a number of small simplifications in regression test outputs, which are all to the good IMO. Bump catversion because we need to dump a few more fields of RTE_SUBQUERY RTEs. While those will always be zeroes anyway in stored rules (because we'd never populate them until query rewrite) they are useful for debugging, and it seems like we'd better make sure to transmit such RTEs accurately in plans sent to parallel workers. I don't think the executor actually examines these fields after startup, but someday it might. Amit Langote Discussion: https://postgr.es/m/CA+HiwqEf7gPN4Hn+LoZ4tP2q_Qt7n3vw7-6fJKOf92tSEnX6Gg@mail.gmail.com
* Update copyright for 2023 (Bruce Momjian, 2023-01-02)
  Backpatch-through: 11
* Rework query relation permission checking (Alvaro Herrera, 2022-12-06)
  Currently, information about the permissions to be checked on relations mentioned in a query is stored in their range table entries. So the executor must scan the entire range table looking for relations that need to have permissions checked. This can make the permission checking part of the executor initialization needlessly expensive when many inheritance children are present in the range table. While the permissions need not be checked on the individual child relations, the executor still must visit every range table entry to filter them out.
  This commit moves the permission checking information out of the range table entries into a new plan node called RTEPermissionInfo. Every top-level (inheritance "root") RTE_RELATION entry in the range table gets one and a list of those is maintained alongside the range table. This new list is initialized by the parser when initializing the range table. The rewriter can add more entries to it as rules/views are expanded. Finally, the planner combines the lists of the individual subqueries into one flat list that is passed to the executor for checking.
  To make it quick to find the RTEPermissionInfo entry belonging to a given relation, RangeTblEntry gets a new Index field 'perminfoindex' that stores the corresponding RTEPermissionInfo's index in the query's list of the latter.
  ExecutorCheckPerms_hook has gained another List * argument; the signature is now:
    typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable, List *rtePermInfos, bool ereport_on_violation);
  The first argument is no longer used by any in-core uses of the hook, but we leave it in place because there may be other implementations that do. Implementations should likely scan the rtePermInfos list to determine which operations to allow or deny.
  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
* Re-pgindent a few files. (Tom Lane, 2022-12-04)
  Just because I'm a neatnik, and I'm currently working on code in this area. It annoys me to not be able to pgindent my patches without working around unrelated changes.
* Move PartitionPruneInfo out of plan nodes into PlannedStmt (Alvaro Herrera, 2022-12-01)
  The planner will now add a given PartitionPruneInfo to PlannedStmt.partPruneInfos instead of directly to the Append/MergeAppend plan node. What gets set instead in the latter is an index field which points to the list element of PlannedStmt.partPruneInfos containing the PartitionPruneInfo belonging to the plan node.
  A later commit will make AcquireExecutorLocks() do the initial partition pruning to determine a minimal set of partitions to be locked when validating a plan tree and it will need to consult the PartitionPruneInfos referenced therein to do so. It would be better for the PartitionPruneInfos to be accessible directly than requiring a walk of the plan tree to find them, which is easier when it can be done by simply iterating over PlannedStmt.partPruneInfos.
  Author: Amit Langote <amitlangote09@gmail.com>
  Discussion: https://postgr.es/m/CA+HiwqFGkMSge6TgC9KQzde0ohpAycLQuV7ooitEEpbKB0O_mg@mail.gmail.com
* Estimate cost of elided SubqueryScan, Append, MergeAppend nodes better. (Tom Lane, 2022-07-19)
  setrefs.c contains logic to discard no-op SubqueryScan nodes, that is, ones that have no qual to check and copy the input targetlist unchanged. (Formally it's not very nice to be applying such optimizations so late in the planner, but there are practical reasons for it; mostly that we can't unify relids between the subquery and the parent query until we flatten the rangetable during setrefs.c.) This behavior falsifies our previous cost estimates, since we would've charged cpu_tuple_cost per row just to pass data through the node. Most of the time that's little enough to not matter, but there are cases where this effect visibly changes the plan compared to what you would've gotten with no sub-select.
  To improve the situation, make the callers of cost_subqueryscan tell it whether they think the targetlist is trivial. cost_subqueryscan already has the qual list, so it can check the other half of the condition easily. It could make its own determination of tlist triviality too, but doing so would be repetitive (for callers that may call it several times) or unnecessarily expensive (for callers that can determine this more cheaply than a general test would do). This isn't a 100% solution, because createplan.c also does things that can falsify any earlier estimate of whether the tlist is trivial. However, it fixes nearly all cases in practice, if results for the regression tests are anything to go by.
  setrefs.c also contains logic to discard no-op Append and MergeAppend nodes. We did have knowledge of that behavior at costing time, but somebody failed to update it when a check on parallel-awareness was added to the setrefs.c logic. Fix that while we're here.
  These changes result in two minor changes in query plans shown in our regression tests. Neither is relevant to the purposes of its test case AFAICT.
  Patch by me; thanks to Richard Guo for review.
  Discussion: https://postgr.es/m/2581077.1651703520@sss.pgh.pa.us
* Wrap overly long lines (Alvaro Herrera, 2022-07-19)
  Reported by Richard Guo.
  Reviewed-by: Richard Guo <guofenglinux@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/CAMbWs4-3ywL_tPHJKk-Vvzr-tBaR--b6XxGGm8Xe7vsG38AWog@mail.gmail.com
* Improve comments for trivial_subqueryscan(). (Etsuro Fujita, 2022-06-09)
  This function can be called from mark_async_capable_plan(), a helper function for create_append_plan(), before set_subqueryscan_references(), to determine the triviality of a SubqueryScan that is a child of an Append plan node, which is done before doing finalize_plan() on the SubqueryScan (if necessary) and set_plan_references() on the subplan, unlike when called from set_subqueryscan_references(). The reason why this is safe wouldn't be that obvious, so add comments explaining this.
  Follow-up for commit c2bb02bc2.
  Reviewed by Zhihong Yu.
  Discussion: https://postgr.es/m/CAPmGK17%2BGiJBthC6va7%2B9n6t75e-M1N0U18YB2G1B%2BE5OdrNTA%40mail.gmail.com
* Change mechanism to set up source targetlist in MERGE (Alvaro Herrera, 2022-04-12)
  We were setting MERGE source subplan's targetlist by expanding the individual attributes of the source relation completely, early in the parse analysis phase. This failed to work when the condition of an action included a whole-row reference, causing setrefs.c to error out with "ERROR: variable not found in subplan target lists" because at that point there is nothing to resolve the whole-row reference with.
  We can fix this by having preprocess_targetlist expand the source targetlist for Vars required from the source rel by all actions. Moreover, by using this expansion mechanism we can do away with the targetlist expansion in transformMergeStmt, which is good because then we no longer pull in columns that aren't needed for anything.
  Add a test case for the problem.
  While at it, remove some redundant code in preprocess_targetlist(): MERGE was doing separately what is already being done for UPDATE/DELETE, so we can just rely on the latter and remove the former. (The handling of inherited rels was different for MERGE, but that was a no-longer-necessary hack.)
  Fix outdated, related comments for fix_join_expr also.
  Author: Richard Guo <guofenglinux@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Joe Wildish <joe@lateraljoin.com>
  Discussion: https://postgr.es/m/fab3b90a-914d-46a9-beb0-df011ee39ee5@www.fastmail.com
* Teach planner and executor about monotonic window funcs (David Rowley, 2022-04-08)
  Window functions such as row_number() always return a value higher than the previously returned value for tuples in any given window partition.
  Traditionally queries such as:
    SELECT * FROM (SELECT *, row_number() over (order by c) rn FROM t) t WHERE rn <= 10;
  were executed fairly inefficiently. Neither the query planner nor the executor knew that once rn made it to 11 that nothing further would match the outer query's WHERE clause. It would blindly continue until all tuples were exhausted from the subquery.
  Here we implement means to make the above execute more efficiently. This is done by way of adding a pg_proc.prosupport function to various of the built-in window functions and adding supporting code to allow the support function to inform the planner if the window function is monotonically increasing, monotonically decreasing, both or neither. The planner is then able to make use of that information and possibly allow the executor to short-circuit execution by way of adding a "run condition" to the WindowAgg to allow it to determine if some of its execution work can be skipped.
  This "run condition" is not like a normal filter. These run conditions are only built using quals comparing values to monotonic window functions. For monotonic increasing functions, quals making use of the btree operators for <, <= and = can be used (assuming the window function column is on the left). You can see here that once such a condition becomes false that a monotonic increasing function could never make it subsequently true again. For monotonically decreasing functions the >, >= and = btree operators for the given type can be used for run conditions.
  The best-case situation for this is when there is a single WindowAgg node without a PARTITION BY clause. Here when the run condition becomes false the WindowAgg node can simply return NULL. No more tuples will ever match the run condition. It's a little more complex when there is a PARTITION BY clause. In this case, we cannot return NULL as we must still process other partitions. To speed this case up we pull tuples from the outer plan to check if they're from the same partition and simply discard them if they are. When we find a tuple belonging to another partition we start processing as normal again until the run condition becomes false or we run out of tuples to process.
  When there are multiple WindowAgg nodes to evaluate then this complicates the situation. For intermediate WindowAggs we must ensure we always return all tuples to the calling node. Any filtering done could lead to incorrect results in WindowAgg nodes above. For all intermediate nodes, we can still save some work when the run condition becomes false. We've no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot reference the value of these and these tuples will not appear in the final result anyway. The savings here are small in comparison to what can be saved in the top-level WindowAgg, but still worthwhile.
  Intermediate WindowAgg nodes never filter out tuples, but here we change WindowAgg so that the top-level WindowAgg filters out tuples that don't match the intermediate WindowAgg node's run condition. Such filters appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.
  Here we add prosupport functions to allow the above to work for: row_number(), rank(), dense_rank(), count(*) and count(expr). It appears technically possible to do the same for min() and max(), however, it seems unlikely to be useful enough, so that's not done here.
  Bump catversion
  Author: David Rowley
  Reviewed-by: Andy Fan, Zhihong Yu
  Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
* Allow asynchronous execution in more cases. (Etsuro Fujita, 2022-04-06)
  In commit 27e1f1456, create_append_plan() only allowed the subplan created from a given subpath to be executed asynchronously when it was an async-capable ForeignPath. To extend coverage, this patch handles cases when the given subpath includes some other Path types as well that can be omitted in the plan processing, such as a ProjectionPath directly atop an async-capable ForeignPath, allowing asynchronous execution in partitioned-scan/partitioned-join queries with non-Var tlist expressions and more UNION queries.
  Andrey Lepikhov and Etsuro Fujita, reviewed by Alexander Pyhalov and Zhihong Yu.
  Discussion: https://postgr.es/m/659c37a8-3e71-0ff2-394c-f04428c76f08%40postgrespro.ru
* Add support for MERGE SQL command (Alvaro Herrera, 2022-03-28)
  MERGE performs actions that modify rows in the target table using a source table or query. MERGE provides a single SQL statement that can conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise require multiple PL statements. For example,
    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;
  MERGE works with regular tables, partitioned tables and inheritance hierarchies, including column and row security enforcement, as well as support for row and statement triggers and transition tables therein.
  MERGE is optimized for OLTP and is parameterizable, though also useful for large scale ETL/ELT. MERGE is not intended to be used in preference to existing single SQL commands for INSERT, UPDATE or DELETE since there is some overhead. MERGE can be used from PL/pgSQL.
  MERGE does not support targeting updatable views or foreign tables, and RETURNING clauses are not allowed either. These limitations are likely fixable with sufficient effort. Rewrite rules are also not supported, but it's not clear that we'd want to support them.
  Author: Pavan Deolasee <pavan.deolasee@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Simon Riggs <simon.riggs@enterprisedb.com>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
  Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
  Reviewed-by: Japin Li <japinli@hotmail.com>
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
  Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
  Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
* Consider parallel awareness when removing single-child Appends (David Rowley, 2022-01-25)
  8edd0e794 added some code to remove Append and MergeAppend nodes when they contained a single child node. As it turned out, this was unsafe to do when the Append/MergeAppend was parallel_aware and the child node was not. Removing the Append/MergeAppend, in this case, could lead to the child plan being called multiple times by parallel workers when it was unsafe to do so.
  Here we fix this by just not removing the Append/MergeAppend when the parallel_aware flag of the parent and child node don't match.
  Reported-by: Yura Sokolov
  Bug: #17335
  Discussion: https://postgr.es/m/b59605fecb20ba9ea94e70ab60098c237c870628.camel%40postgrespro.ru
  Backpatch-through: 12, where 8edd0e794 was first introduced
* Update copyright for 2022 (Bruce Momjian, 2022-01-07)
  Backpatch-through: 10
* Fix index-only scan plans, take 2. (Tom Lane, 2022-01-03)
  Commit 4ace45677 failed to fix the problem fully, because the same issue of attempting to fetch a non-returnable index column can occur when rechecking the indexqual after using a lossy index operator. Moreover, it broke EXPLAIN for such indexquals (which indicates a gap in our test cases :-().
  Revert the code changes of 4ace45677 in favor of adding a new field to struct IndexOnlyScan, containing a version of the indexqual that can be executed against the index-returned tuple without using any non-returnable columns. (The restrictions imposed by check_index_only guarantee this is possible, although we may have to recompute indexed expressions.) Support construction of that during setrefs.c processing by marking IndexOnlyScan.indextlist entries as resjunk if they can't be returned, rather than removing them entirely. (We could alternatively require setrefs.c to look up the IndexOptInfo again, but abusing resjunk this way seems like a reasonably safe way to avoid needing to do that.)
  This solution isn't great from an API-stability standpoint: if there are any extensions out there that build IndexOnlyScan structs directly, they'll be broken in the next minor releases. However, only a very invasive extension would be likely to do such a thing. There's no change in the Path representation, so typical planner extensions shouldn't have a problem.
  As before, back-patch to all supported branches.
  Discussion: https://postgr.es/m/3179992.1641150853@sss.pgh.pa.us
  Discussion: https://postgr.es/m/17350-b5bdcf476e5badbb@postgresql.org
* Remove arbitrary 64K-or-so limit on rangetable size. (Tom Lane, 2021-09-15)
  Up to now the size of a query's rangetable has been limited by the constants INNER_VAR et al, which mustn't be equal to any real rangetable index. 65000 doubtless seemed like enough for anybody, and it still is orders of magnitude larger than the number of joins we can realistically handle. However, we need a rangetable entry for each child partition that is (or might be) processed by a query. Queries with a few thousand partitions are getting more realistic, so that the day when that limit becomes a problem is in sight, even if it's not here yet. Hence, let's raise the limit.
  Rather than just increase the values of INNER_VAR et al, this patch adopts the approach of making them small negative values, so that rangetables could theoretically become as long as INT_MAX.
  The bulk of the patch is concerned with changing Var.varno and some related variables from "Index" (unsigned int) to plain "int". This is basically cosmetic, with little actual effect other than to help debuggers print their values nicely. As such, I've only bothered with changing places that could actually see INNER_VAR et al, which the parser and most of the planner don't. We do have to be careful in places that are performing less/greater comparisons on varnos, but there are very few such places, other than the IS_SPECIAL_VARNO macro itself.
  A notable side effect of this patch is that while it used to be possible to add INNER_VAR et al to a Bitmapset, that will now draw an error. I don't see any likelihood that it wouldn't be a bug to include these fake varnos in a bitmapset of real varnos, so I think this is all to the good.
  Although this touches outfuncs/readfuncs, I don't think a catversion bump is required, since stored rules would never contain Vars with these fake varnos.
  Andrey Lepikhov and Tom Lane, after a suggestion by Peter Eisentraut
  Discussion: https://postgr.es/m/43c7f2f5-1e27-27aa-8c65-c91859d15190@postgrespro.ru