path: root/src/backend/commands/explain.c
* Set query ID for inner queries of CREATE TABLE AS and DECLARE (Michael Paquier, 2024-10-28)

  Some utility statements contain queries that can be planned and executed: CREATE TABLE AS and DECLARE CURSOR. This commit adds query ID computation for the inner queries executed by these two utility commands, with and without EXPLAIN. This change leads to four new callers of JumbleQuery() and post_parse_analyze_hook() so that extensions can decide what to do with this new data.

  Previously, extensions relying on the query ID, like pg_stat_statements, were not able to track these nested queries as the query_id was 0. For pg_stat_statements, this commit leads to additions under !toplevel when pg_stat_statements.track is set to "all", as shown in its regression tests. The output of EXPLAIN for these two utilities gains a "Query Identifier" if compute_query_id is enabled.

  Author: Anthonin Bonnefoy
  Reviewed-by: Michael Paquier, Jian He
  Discussion: https://postgr.es/m/CAO6_XqqM6S9bQ2qd=75W+yKATwoazxSNhv5sjW06fjGAtHbTUA@mail.gmail.com
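  Example (a minimal sketch; the table is hypothetical and the identifier value is computed at run time):

      SET compute_query_id = on;
      EXPLAIN (VERBOSE) CREATE TABLE ctas_example AS SELECT 1 AS a;
      -- the plan for the inner SELECT now ends with a line such as:
      --   Query Identifier: <64-bit query ID>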
* Adjust EXPLAIN's output for disabled nodes (David Rowley, 2024-10-11)

  c01743aa4 added EXPLAIN output to display the plan node's disabled_node count whenever that count is above 0. Seemingly, there weren't many people who liked that output, as each parent of a disabled node would also have a "Disabled Nodes" output due to the way disabled_nodes is accumulated towards the root plan node. It was often hard and sometimes impossible to figure out which nodes were disabled from looking at EXPLAIN. You might think it would be possible to manually add up the numbers from the "Disabled Nodes" output of a given node's children to figure out if that node has a higher disabled_nodes count than its children, but that wouldn't have worked for Append and Merge Append nodes if some disabled child nodes were run-time pruned during init plan. Those children are not displayed in EXPLAIN.

  Here we attempt to improve this output by showing "Disabled: true" only against the nodes which are explicitly disabled themselves. That seems to be the output desired by most people who voiced their opinion. This is done by summing up the disabled_nodes of the given node's children and checking if that number is less than the disabled_nodes of the current node.

  This commit also fixes a bug in make_sort() which was neglecting to set the Sort's disabled_nodes field. This should have copied what was done in cost_sort(), but it hadn't been updated. With the new output, the choice to not maintain that field properly was clearly wrong as the disabled-ness of the node was attributed to the Sort's parent instead.

  Reviewed-by: Laurenz Albe, Alena Rybakina
  Discussion: https://postgr.es/m/9e4ad616bebb103ec2084bf6f724cfc739e7fabb.camel@cybertec.at
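  A sketch of the new output (hypothetical table with no usable index):

      SET enable_seqscan = off;
      EXPLAIN (COSTS OFF) SELECT * FROM t;
      --  Seq Scan on t
      --    Disabled: true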
* Add memory/disk usage for more executor nodes. (Tatsuo Ishii, 2024-09-23)

  This commit is similar to 95d6e9af07, expanding the idea to CTE scan, table function scan and recursive union scan nodes so that the maximum tuplestore memory or disk usage is shown with the EXPLAIN ANALYZE command.

  Also adjust show_storage_info() so that it accepts storage type and storage size arguments instead of a Tuplestorestate. This allows the node types to share the formatting code using show_storage_info(). Due to this, show_material_info() and show_windowagg_info() are also modified.

  Reviewed-by: David Rowley
  Discussion: https://postgr.es/m/20240918.211246.1127161704188186085.ishii%40postgresql.org
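  Example (a sketch; the storage figure is illustrative):

      EXPLAIN (ANALYZE, COSTS OFF)
      WITH w AS MATERIALIZED (SELECT * FROM generate_series(1, 1000) g)
      SELECT * FROM w;
      -- the CTE Scan node now reports a line such as:
      --   Storage: Memory  Maximum Storage: NNkB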
* Add memory/disk usage for Window aggregate nodes in EXPLAIN. (Tatsuo Ishii, 2024-09-17)

  This commit is similar to 1eff8279d and expands the idea to Window aggregate nodes so that users can know how much memory or disk the tuplestore used. It uses the newly introduced tuplestore_get_stats() to inquire this information and adds some additional output in EXPLAIN ANALYZE to display the information for the Window aggregate node.

  Reviewed-by: David Rowley, Ashutosh Bapat, Maxim Orlov, Jian He
  Discussion: https://postgr.es/m/20240706.202254.89740021795421286.ishii%40postgresql.org
* Adjust tuplestore stats API (David Rowley, 2024-09-12)

  1eff8279d added an API to tuplestore.c to allow callers to obtain storage telemetry data. That API wasn't quite good enough for callers that perform tuplestore_clear(), as the telemetry functions only accounted for the current state of the tuplestore, not the maximums before tuplestore_clear() was called.

  There's a pending patch that would like to add tuplestore telemetry output to EXPLAIN ANALYZE for WindowAgg. That node type uses tuplestore_clear() before moving to the next window partition and we want to show the maximum space used, not the space used for the final partition.

  Reviewed-by: Tatsuo Ishii, Ashutosh Bapat
  Discussion: https://postgr.es/m/CAApHDvoY8cibGcicLV0fNh=9JVx9PANcWvhkdjBnDCc9Quqytg@mail.gmail.com
* Introduce an RTE for the grouping step (Richard Guo, 2024-09-10)

  If there are subqueries in the grouping expressions, each of these subqueries in the targetlist and HAVING clause is expanded into distinct SubPlan nodes. As a result, only one of these SubPlan nodes would be converted to a reference to the grouping key column output by the Agg node; the others would have to get evaluated afresh. This is not efficient, and with grouping sets this can cause wrong-results issues in cases where they should go to NULL because they are from the wrong grouping set. Furthermore, during re-evaluation, these SubPlan nodes might use nulled column values from grouping sets, which is not correct.

  This issue is not limited to subqueries. For other types of expressions that are part of grouping items, if they are transformed into another form during preprocessing, they may fail to match lower target items. This can also lead to wrong results with grouping sets.

  To fix this issue, we introduce a new kind of RTE representing the output of the grouping step, with columns that are the Vars or expressions being grouped on. In the parser, we replace the grouping expressions in the targetlist and HAVING clause with Vars referencing this new RTE, so that the output of the parser directly expresses the semantic requirement that the grouping expressions be gotten from the grouping output rather than computed some other way. In the planner, we first preprocess all the columns of this new RTE and then replace any Vars in the targetlist and HAVING clause that reference this new RTE with the underlying grouping expressions, so that we will have only one instance of a SubPlan node for each subquery contained in the grouping expressions.

  Bump catversion because this changes the querytree produced by the parser.

  Thanks to Tom Lane for the idea to invent a new kind of RTE. Per reports from Geoff Winkless, Tobias Wendorff, Richard Guo from various threads.

  Author: Richard Guo
  Reviewed-by: Ashutosh Bapat, Sutou Kouhei
  Discussion: https://postgr.es/m/CAMbWs4_dp7e7oTwaiZeBX8+P1rXw4ThkZxh1QG81rhu9Z47VsQ@mail.gmail.com
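  A sketch of the kind of query affected (hypothetical tables; the subquery in the grouping expression is now taken from the grouping output once, rather than re-evaluated per reference):

      SELECT (SELECT max(b) FROM t2 WHERE t2.a = t1.a) AS s, count(*)
      FROM t1
      GROUP BY GROUPING SETS ((s), ());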
* Show number of disabled nodes in EXPLAIN ANALYZE output. (Robert Haas, 2024-08-21)

  Now that disable_cost is not included in the cost estimate, there's no visible sign in EXPLAIN output of which plan nodes are disabled. Fix that by propagating the number of disabled nodes from Path to Plan, and then showing it in the EXPLAIN output.

  There is some question about whether this is a desirable change. While I personally believe that it is, it seems best to make it a separate commit, in case we decide to back out just this part, or rework it.

  Reviewed by Andres Freund, Heikki Linnakangas, and David Rowley.
  Discussion: http://postgr.es/m/CA+TgmoZ_+MS+o6NeGK2xyBv-xM+w1AfFVuHE4f_aq6ekHv7YSQ@mail.gmail.com
* Use PqMsg_* macros in more places. (Nathan Bossart, 2024-07-17)

  Commit f4b54e1ed9, which introduced macros for protocol characters, missed updating a few places. It also did not introduce macros for messages sent from parallel workers to their leader processes. This commit adds a new section in protocol.h for those.

  Author: Aleksander Alekseev
  Discussion: https://postgr.es/m/CAJ7c6TNTd09AZq8tGaHS3LDyH_CCnpv0oOz2wN1dGe8zekxrdQ%40mail.gmail.com
  Backpatch-through: 17
* Show Parallel Bitmap Heap Scan worker stats in EXPLAIN ANALYZE (David Rowley, 2024-07-09)

  Nodes like Memoize report the cache stats for each parallel worker, so it makes sense to show the exact and lossy pages in Parallel Bitmap Heap Scan in a similar way. Likewise, Sort shows the method and memory used for each worker.

  There was some discussion on whether the leader stats should include the totals for each parallel worker or not. I did some analysis on this to see what other parallel node types do and it seems only Parallel Hash does anything like this. All the rest, per what's supported by ExecParallelRetrieveInstrumentation(), are consistent with each other.

  Author: David Geier <geidav.pg@gmail.com>
  Author: Heikki Linnakangas <hlinnaka@iki.fi>
  Author: Donghang Lin <donghanglin@gmail.com>
  Author: Alena Rybakina <lena.ribackina@yandex.ru>
  Author: David Rowley <dgrowleyml@gmail.com>
  Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
  Reviewed-by: Michael Christofides <michael@pgmustard.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
  Reviewed-by: Donghang Lin <donghanglin@gmail.com>
  Reviewed-by: Masahiro Ikeda <Masahiro.Ikeda@nttdata.com>
  Discussion: https://postgr.es/m/b3d80961-c2e5-38cc-6a32-61886cdf766d%40gmail.com
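  A sketch of the per-worker output (hypothetical table; counts illustrative and format abbreviated):

      EXPLAIN (ANALYZE, COSTS OFF) SELECT count(*) FROM t WHERE a < 1000;
      --   ->  Parallel Bitmap Heap Scan on t
      --         Heap Blocks: exact=NNNN
      --         Worker 0:  Heap Blocks: exact=NNNN lossy=NN
      --         Worker 1:  Heap Blocks: exact=NNNN lossy=NN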
* Widen lossy and exact page counters for Bitmap Heap Scan (David Rowley, 2024-07-08)

  Both of these counters were using the "long" data type. On MSVC that's a 32-bit type. On modern hardware, I was able to demonstrate that we can wrap those counters with a query that only takes 15 minutes to run.

  This issue may manifest itself either by not showing the values of the counters because they've wrapped and are less than zero, resulting in them being filtered by the > 0 checks in show_tidbitmap_info(), or by bogus numbers being displayed which are modulo 2^32 of the actual number.

  Widen these counters to uint64.

  Discussion: https://postgr.es/m/CAApHDvpS_97TU+jWPc=T83WPp7vJa1dTw3mojEtAVEZOWh9bjQ@mail.gmail.com
* Fix newly introduced issue in EXPLAIN for Materialize nodes (David Rowley, 2024-07-05)

  The code added in 1eff8279d was lacking a check to see if the tuplestore had been created. In nodeMaterial.c this is done by ExecMaterial() rather than by ExecInitMaterial(), so the tuplestore won't be created unless the node has been executed at least once, as demonstrated by Alexander in his report.

  Here we skip showing any of the new EXPLAIN ANALYZE information when the Materialize node has not been executed.

  Reported-by: Alexander Lakhin
  Discussion: https://postgr.es/m/fe7fc8fb-86e5-ecb0-3cb2-dd2c9a6c482f@gmail.com
* Add memory/disk usage for Material nodes in EXPLAIN (David Rowley, 2024-07-05)

  Up until now, there was no easy way to determine if a Material node caused the underlying tuplestore to spill to disk, or even to see how much memory the tuplestore used if it didn't. Here we add some new functions to tuplestore.c to query this information and add some additional output in EXPLAIN ANALYZE to display this information for the Material node.

  There are a few other executor node types that use tuplestores, so we could also consider adding these details to the EXPLAIN ANALYZE for those nodes too. Let's consider those independently from this. Having the tuplestore.c infrastructure in place to allow that is step 1.

  Author: David Rowley
  Reviewed-by: Matthias van de Meent, Dmitry Dolgov
  Discussion: https://postgr.es/m/CAApHDvp5Py9g4Rjq7_inL3-MCK1Co2CRt_YWFwTU2zfQix0p4A@mail.gmail.com
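  A sketch of the new Material output (hypothetical tables; figures illustrative):

      EXPLAIN (ANALYZE, COSTS OFF)
      SELECT * FROM t1 JOIN t2 ON t1.a > t2.a;
      --   ->  Materialize
      --         Storage: Disk  Maximum Storage: NNNNkB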
* Support "Right Semi Join" plan shapesRichard Guo2024-07-05
| | | | | | | | | | | | | | | | | | | | | | | Hash joins can support semijoin with the LHS input on the right, using the existing logic for inner join, combined with the assurance that only the first match for each inner tuple is considered, which can be achieved by leveraging the HEAP_TUPLE_HAS_MATCH flag. This can be very useful in some cases since we may now have the option to hash the smaller table instead of the larger. Merge join could likely support "Right Semi Join" too. However, the benefit of swapping inputs tends to be small here, so we do not address that in this patch. Note that this patch also modifies a test query in join.sql to ensure it continues testing as intended. With this patch the original query would result in a right-semi-join rather than semi-join, compromising its original purpose of testing the fix for neqjoinsel's behavior for semi-joins. Author: Richard Guo Reviewed-by: wenhui qiu, Alena Rybakina, Japin Li Discussion: https://postgr.es/m/CAMbWs4_X1mN=ic+SxcyymUqFx9bB8pqSLTGJ-F=MHy4PW3eRXw@mail.gmail.com
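  Example (hypothetical tables, with small_t much smaller than big_t; the planner can now hash the small LHS instead of the big subquery side):

      EXPLAIN (COSTS OFF)
      SELECT * FROM small_t WHERE a IN (SELECT a FROM big_t);
      --  Hash Right Semi Join
      --    Hash Cond: (big_t.a = small_t.a)
      --    ->  Seq Scan on big_t
      --    ->  Hash
      --          ->  Seq Scan on small_t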
* Fix some inconsistencies in EXPLAIN output (David Rowley, 2024-05-16)

  06286709e added a SERIALIZE option to EXPLAIN which included showing the amount of kilobytes serialized. The calculation to convert bytes into kilobytes wasn't consistent with how that's done in the rest of EXPLAIN. Traditionally we round up to the nearest kB, but the new code rounded to the nearest kB instead of rounding up. To fix this, invent a macro that does the conversion and use that macro everywhere that requires this conversion.

  Additionally, 5de890e36 added EXPLAIN (MEMORY) but included the memory sizes in bytes. Convert these values to kilobytes to align with the other memory-related outputs.

  In passing, swap out a "long" type in show_hash_info() and use a uint64 instead. We do support platforms where sizeof(Size) == 8 and sizeof(long) == 4, so using a long there is questionable.

  Reported-by: jian he
  Reviewed-by: jian he
  Discussion: https://www.postgresql.org/message-id/CACJufxE4Sp7xvgOwhqtFx5hS85AxMKobPWDo-xZHZVTpK3EBjA@mail.gmail.com
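  The conversion is just an integer divide that rounds up; a minimal sketch of such a macro (the committed name and location may differ):

      /* convert a byte count to kilobytes, rounding up */
      #define BYTES_TO_KILOBYTES(b) (((b) + 1023) / 1024)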
* Fix incorrect format placeholders (Peter Eisentraut, 2024-04-09)
* Add basic JSON_TABLE() functionality (Amit Langote, 2024-04-04)

  JSON_TABLE() allows JSON data to be converted into a relational view and thus used, for example, in a FROM clause, like other tabular data.

  Data to show in the view is selected from a source JSON object using a JSON path expression to get a sequence of JSON objects that's called a "row pattern", which becomes the source to compute the SQL/JSON values that populate the view's output columns. Column values themselves are computed using JSON path expressions applied to each of the JSON objects comprising the "row pattern", for which the SQL/JSON query functions added in 6185c9737cf4 are used.

  To implement JSON_TABLE() as a table function, this augments the TableFunc and TableFuncScanState nodes that are currently used to support XMLTABLE() with some JSON_TABLE()-specific fields.

  Note that the JSON_TABLE() spec includes NESTED COLUMNS and PLAN clauses, which are required to provide more flexibility to extract data out of nested JSON objects, but they are not implemented here to keep this commit of manageable size.

  Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
  Author: Teodor Sigaev <teodor@sigaev.ru>
  Author: Oleg Bartunov <obartunov@gmail.com>
  Author: Alexander Korotkov <aekorotkov@gmail.com>
  Author: Andrew Dunstan <andrew@dunslane.net>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Jian He <jian.universality@gmail.com>

  Reviewers have included (in no particular order): Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby, Álvaro Herrera, Jian He

  Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
  Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
  Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
  Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
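  A minimal usage sketch (the data and column definitions are hypothetical):

      SELECT jt.*
      FROM json_table(
             '[{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]',
             '$[*]'
             COLUMNS (a int PATH '$.a', b text PATH '$.b')
           ) AS jt;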
* Invent SERIALIZE option for EXPLAIN. (Tom Lane, 2024-04-03)

  EXPLAIN (ANALYZE, SERIALIZE) allows collection of statistics about the volume of data emitted by a query, as well as the time taken to convert the data to the on-the-wire format. Previously there was no way to investigate this without actually sending the data to the client, in which case network transmission costs might swamp what you wanted to see. In particular this feature allows investigating the costs of de-TOASTing compressed or out-of-line data during formatting.

  Stepan Rutz and Matthias van de Meent, reviewed by Tomas Vondra and myself

  Discussion: https://postgr.es/m/ca0adb0e-fa4e-c37e-1cd7-91170b18cae1@gmx.de
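  Usage sketch (hypothetical table; figures illustrative):

      EXPLAIN (ANALYZE, SERIALIZE) SELECT * FROM toasty;
      -- output gains a line along the lines of:
      --   Serialization: time=NN.NNN ms  output=NNNNkB  format=text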
* Improve EXPLAIN's display of SubPlan nodes and output parameters. (Tom Lane, 2024-03-19)

  Historically we've printed SubPlan expression nodes as "(SubPlan N)", which is pretty uninformative. Trying to reproduce the original SQL for the subquery is still as impractical as before, and would be mighty verbose as well. However, we can still do better than that. Displaying the "testexpr" when present, and adding a keyword to indicate the SubLinkType, goes a long way toward showing what's really going on.

  In addition, this patch gets rid of EXPLAIN's use of "$n" to represent subplan and initplan output Params. Instead we now print "(SubPlan N).colX" or "(InitPlan N).colX" to represent the X'th output column of that subplan. This eliminates confusion with the use of "$n" to represent PARAM_EXTERN Params, and it's useful for the first part of this change because it eliminates needing some other indication of which subplan is referenced by a SubPlan that has a testexpr.

  In passing, this adds simple regression test coverage of the ROWCOMPARE_SUBLINK code paths, which were entirely unburdened by testing before.

  Tom Lane and Dean Rasheed, reviewed by Aleksander Alekseev. Thanks to Chantal Keller for raising the question of whether this area couldn't be improved.

  Discussion: https://postgr.es/m/2838538.1705692747@sss.pgh.pa.us
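  A sketch of the new notation (hypothetical tables):

      EXPLAIN (VERBOSE, COSTS OFF)
      SELECT * FROM t1 WHERE a = (SELECT max(a) FROM t2);
      -- the filter that used to read "(a = $0)" now reads something like:
      --   Filter: (t1.a = (InitPlan 1).col1)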
* Improve support for ExplainOneQuery() hook (Michael Paquier, 2024-03-11)

  There is a hook called ExplainOneQuery_hook that gives modules the possibility to plug into this code path but, contrary to utility.c for utility statement execution, there was no corresponding "standard" routine in the case of EXPLAIN executed for one Query. This commit adds a new standard_ExplainOneQuery() in explain.c, which is able to run explain on a non-utility Query without calling its hook.

  Per the feedback received from a couple of hackers, this change gives the possibility to cut a few hundred lines of code in some of the popular out-of-core modules, as these maintained a copy of ExplainOneQuery(), adding custom extra information at the beginning or the end of the EXPLAIN output.

  Author: Mats Kindahl
  Reviewed-by: Aleksander Alekseev, Jelte Fennema-Nio, Andrei Lepikhov
  Discussion: https://postgr.es/m/CA+14427V_B4EAoC_o-iYYucRdMSOTfpuH9k-QbexffY1HYJBiA@mail.gmail.com
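  A sketch of how an extension can now wrap the standard routine instead of keeping a private copy (signature abbreviated; explain.h is authoritative for the prototypes):

      #include "postgres.h"
      #include "fmgr.h"
      #include "commands/explain.h"

      PG_MODULE_MAGIC;

      static ExplainOneQuery_hook_type prev_hook = NULL;

      static void
      my_ExplainOneQuery(Query *query, int cursorOptions, IntoClause *into,
                         ExplainState *es, const char *queryString,
                         ParamListInfo params, QueryEnvironment *queryEnv)
      {
          /* emit custom material, then delegate instead of copying code */
          ExplainPropertyText("Extension Note", "hello from my module", es);
          standard_ExplainOneQuery(query, cursorOptions, into, es,
                                   queryString, params, queryEnv);
      }

      void
      _PG_init(void)
      {
          prev_hook = ExplainOneQuery_hook;
          ExplainOneQuery_hook = my_ExplainOneQuery;
      }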
* Remove unused #include's from backend .c files (Peter Eisentraut, 2024-03-04)

  as determined by include-what-you-use (IWYU)

  While IWYU also suggests to *add* a bunch of #include's (which is its main purpose), this patch does not do that. In some cases, a more specific #include replaces another less specific one.

  Some manual adjustments of the automatic result:

  - IWYU currently doesn't know about includes that provide global variable declarations (like -Wmissing-variable-declarations), so those includes are being kept manually.

  - All includes for port(ability) headers are being kept for now, to play it safe.

  - No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the patch from exploding in size.

  Note that this patch touches just *.c files, so nothing declared in header files changes in hidden ways.

  As a small example, in src/backend/access/transam/rmgr.c, some IWYU pragma annotations are added to handle a special case there.

  Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
* Add EXPLAIN (MEMORY) to report planner memory consumption (Alvaro Herrera, 2024-01-29)

  This adds a new "Memory:" line under the "Planning:" group (which currently only has "Buffers:") when the MEMORY option is specified.

  In order to make the reporting reasonably accurate, we create a separate memory context for planner activities, to be used only when this option is given. The total amount of memory allocated by that context is reported as "allocated"; we subtract memory in the context's freelists from that and report that result as "used". We use MemoryContextStatsInternal() to obtain the quantities.

  The code structure to show buffer usage during planning was not in amazing shape, so I (Álvaro) modified the patch a bit to clean that up in passing.

  Author: Ashutosh Bapat
  Reviewed-by: David Rowley, Andrey Lepikhov, Jian He, Andy Fan
  Discussion: https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com
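  Usage sketch (figures illustrative):

      EXPLAIN (MEMORY, COSTS OFF) SELECT * FROM pg_class;
      --  Planning:
      --    Memory: used=NNNkB  allocated=NNNkB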
* Fix some typos (Michael Paquier, 2024-01-22)

  Author: Yongtao Huang
  Discussion: https://postgr.es/m/CAOe1Go1F99o5JsphtXdDC5bxm7AzetU8q3AxLh4AAVGKu1AzEQ@mail.gmail.com
* Update copyright for 2024 (Bruce Momjian, 2024-01-03)

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12
* Add local_blk_{read|write}_time I/O timing statistics for local blocks (Michael Paquier, 2023-10-19)

  There were no I/O timing statistics for counting read and write timings on local blocks, contrary to the counterparts for temp and shared blocks. This information is available when track_io_timing is enabled.

  The output of EXPLAIN is updated to show this information. An update of pg_stat_statements is planned next.

  Author: Nazir Bilal Yavuz
  Reviewed-by: Robert Haas, Melanie Plageman
  Discussion: https://postgr.es/m/CAN55FZ19Ss279mZuqGbuUNxka0iPbLgYuOQXqAKewrjNrp27VA@mail.gmail.com
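  With track_io_timing enabled, the BUFFERS output of EXPLAIN ANALYZE can now include a local-block entry; a sketch (table hypothetical, values illustrative):

      EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_temp_table;
      --   I/O Timings: local read=NN.NNN write=NN.NNN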
* Rename I/O timing statistics columns to shared_blk_{read|write}_time (Michael Paquier, 2023-10-19)

  These two counters, defined in BufferUsage to track respectively the time spent while reading and writing blocks, have historically only tracked data related to shared buffers, when track_io_timing is enabled.

  An upcoming patch to add specific counters for local buffers will take advantage of this rename, as it has come up that no data is currently tracked for local buffers, and tracking local and shared buffers using the same fields would be inconsistent with the treatment done for temp buffers. Renaming the existing fields clarifies what the block type of each stats field is.

  pg_stat_statements is updated to reflect the rename. No extension version bump is required as 5a3423ad8ee17 has done one, affecting v17~.

  Author: Nazir Bilal Yavuz
  Reviewed-by: Robert Haas, Melanie Plageman
  Discussion: https://postgr.es/m/CAN55FZ19Ss279mZuqGbuUNxka0iPbLgYuOQXqAKewrjNrp27VA@mail.gmail.com
* Add JIT deform_counter (Daniel Gustafsson, 2023-09-08)

  generation_counter includes time spent on both JIT:ing expressions and tuple deforming, which are configured independently via the options jit_expressions and jit_tuple_deforming. As they are combined in the same counter it's not apparent what fraction of time the tuple deforming takes.

  This adds a deform_counter dedicated to tuple deforming, which allows seeing more directly the influence jit_tuple_deforming is having on the query. The counter is exposed in EXPLAIN and pg_stat_statements, bumping pg_stat_statements to 1.11.

  Author: Dmitry Dolgov <9erthalion6@gmail.com>
  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Discussion: https://postgr.es/m/20220612091253.eegstkufdsu4kfls@erthalion.local
* Remove dependency to query text in JumbleQuery() (Michael Paquier, 2023-06-28)

  Since 3db72eb, the query ID of utilities is generated using the Query structure, making the use of the query string in JumbleQuery() unnecessary. This commit removes the argument "querytext" from JumbleQuery().

  Reported-by: Joe Conway
  Reviewed-by: Nathan Bossart
  Discussion: https://postgr.es/m/ZJlQAWE4COFqHuAV@paquier.xyz
* Pre-beta mechanical code beautification. (Tom Lane, 2023-05-19)

  Run pgindent, pgperltidy, and reformat-dat-files.

  This set of diffs is a bit larger than typical. We've updated to pg_bsd_indent 2.1.2, which properly indents variable declarations that have multi-line initialization expressions (the continuation lines are now indented one tab stop). We've also updated to perltidy version 20230309 and changed some of its settings, which reduces its desire to add whitespace to lines to make assignments etc. line up. Going forward, that should make for fewer random-seeming changes to existing code.

  Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
* Support "Right Anti Join" plan shapes.Tom Lane2023-04-05
| | | | | | | | | | | | | Merge and hash joins can support antijoin with the non-nullable input on the right, using very simple combinations of their existing logic for right join and anti join. This gives the planner more freedom about how to order the join. It's particularly useful for hash join, since we may now have the option to hash the smaller table instead of the larger. Richard Guo, reviewed by Ronan Dunklau and myself Discussion: https://postgr.es/m/CAMbWs48xh9hMzXzSy3VaPzGAz+fkxXXTUbCLohX1_L8THFRm2Q@mail.gmail.com
* Simplify useless 0L constants (Peter Eisentraut, 2023-03-29)

  In ancient times, these belonged to arguments or fields that were actually of type long, but now they are not anymore, so this "L" decoration is just confusing. (Some other 0L and other "L" constants remain, where they are actually associated with a long type.)
* Invent GENERIC_PLAN option for EXPLAIN. (Tom Lane, 2023-03-24)

  This provides a very simple way to see the generic plan for a parameterized query. Without this, it's necessary to define a prepared statement and temporarily change plan_cache_mode, which is a bit tedious.

  One thing that's a bit of a hack perhaps is that we disable execution-time partition pruning when the GENERIC_PLAN option is given. That's because the pruning code may attempt to fetch the value of one of the parameters, which would fail.

  Laurenz Albe, reviewed by Julien Rouhaud, Christoph Berg, Michel Pelletier, Jim Jones, and myself

  Discussion: https://postgr.es/m/0a29b954b10b57f0d135fe12aa0909bd41883eb0.camel@cybertec.at
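  Usage sketch (hypothetical table; $1 stays an unbound parameter, with no PREPARE needed):

      EXPLAIN (GENERIC_PLAN) SELECT * FROM t WHERE id = $1;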
* Rename force_parallel_mode to debug_parallel_query (David Rowley, 2023-02-15)

  force_parallel_mode is meant to be used to allow us to exercise the parallel query infrastructure to ensure that it's working as we expect. It seems some users think this GUC is for forcing the query planner into picking a parallel plan regardless of the costs. A quick look at the documentation would have made them realize that they were wrong, but the GUC is likely too conveniently named which, evidently, often results in users expecting that it forces the planner into usefully parallelizing queries.

  Here we rename the GUC to something which casual users are less likely to mistakenly think is what they need to make their query run more quickly. For now, the old name can still be used. We'll revisit whether the old name mapping can be removed once the buildfarm configs are all updated.

  Reviewed-by: John Naylor
  Discussion: https://postgr.es/m/CAApHDvrsOi92_uA7PEaHZMH-S4Xv+MGhQWA+GrP8b1kjpS1HjQ@mail.gmail.com
* Remove dead NoMovementScanDirection code (David Rowley, 2023-02-01)

  Here we remove some dead code from heapgettup() and heapgettup_pagemode() which was trying to support NoMovementScanDirection scans. This code can never be reached as standard_ExecutorRun() never calls ExecutePlan with NoMovementScanDirection.

  Additionally, plans which were scanning an unordered index would use NoMovementScanDirection rather than ForwardScanDirection. There was no real need for this, so here we adjust this so we use ForwardScanDirection for unordered index scans. A comment in pathnodes.h claimed that NoMovementScanDirection was used for PathKey reasons, but if that was true, it no longer is, per code in build_index_paths().

  This does change the non-text format of the EXPLAIN output so that unordered index scans now have a "Forward" scan direction rather than "NoMovement". The text format of EXPLAIN has not changed.

  Author: Melanie Plageman
  Reviewed-by: Tom Lane, David Rowley
  Discussion: https://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com
* Make Vars be outer-join-aware. (Tom Lane, 2023-01-30)

  Traditionally we used the same Var struct to represent the value of a table column everywhere in parse and plan trees. This choice predates our support for SQL outer joins, and it's really a pretty bad idea with outer joins, because the Var's value can depend on where it is in the tree: it might go to NULL above an outer join. So expression nodes that are equal() per equalfuncs.c might not represent the same value, which is a huge correctness hazard for the planner.

  To improve this, decorate Var nodes with a bitmapset showing which outer joins (identified by RTE indexes) may have nulled them at the point in the parse tree where the Var appears. This allows us to trust that equal() Vars represent the same value. A certain amount of klugery is still needed to cope with cases where we re-order two outer joins, but it's possible to make it work without sacrificing that core principle. PlaceHolderVars receive similar decoration for the same reason.

  In the planner, we include these outer join bitmapsets into the relids that an expression is considered to depend on, and in consequence also add outer-join relids to the relids of join RelOptInfos. This allows us to correctly perceive whether an expression can be calculated above or below a particular outer join.

  This change affects FDWs that want to plan foreign joins. They *must* follow suit when labeling foreign joins in order to match with the core planner, but for many purposes (if postgres_fdw is any guide) they'd prefer to consider only base relations within the join. To support both requirements, redefine ForeignScan.fs_relids as base+OJ relids, and add a new field fs_base_relids that's set up by the core planner.

  Large though it is, this commit just does the minimum necessary to install the new mechanisms and get check-world passing again. Follow-up patches will perform some cleanup. (The README additions and comments mention some stuff that will appear in the follow-up.)

  Patch by me; thanks to Richard Guo for review.

  Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
* Make auto_explain print the query identifier in verbose mode (Michael Paquier, 2023-01-26)

  When auto_explain.log_verbose is on, auto_explain should print in the logs plans equivalent to EXPLAIN (VERBOSE). However, when compute_query_id is on, query identifiers were not showing up, being only handled by EXPLAIN (VERBOSE). This brings auto_explain on par with EXPLAIN in that regard. Note that, like EXPLAIN, auto_explain does not show the query identifier when compute_query_id=regress.

  The change is made so that the choice of printing the query identifier is done in ExplainPrintPlan() rather than in ExplainOnePlan(), to avoid duplicating the logic dealing with the query ID. auto_explain is the only in-core caller of ExplainPrintPlan().

  While looking at the area, I have noticed that more consolidation between EXPLAIN and auto_explain would be in order for the logging of the plan duration and the buffer usage. This refactoring is left as a future change.

  Author: Atsushi Torikoshi
  Reviewed-by: Justin Pryzby, Julien Rouhaud
  Discussion: https://postgr.es/m/1ea21936981f161bccfce05765c03bee@oss.nttdata.com
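  A configuration sketch that exercises this behavior (all of these GUCs exist in core and auto_explain):

      # postgresql.conf
      shared_preload_libraries = 'auto_explain'
      compute_query_id = on
      auto_explain.log_min_duration = 0   # log every statement's plan
      auto_explain.log_verbose = on       # now includes "Query Identifier"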
* Use appendStringInfoSpaces in more places (David Rowley, 2023-01-20)

  This adjusts a few places which were appending a string constant containing spaces onto a StringInfo. We have appendStringInfoSpaces for that job, so let's use that instead.

  For the change to jsonb.c's add_indent() function, appendStringInfoString was being called inside a loop to append 4 spaces on each iteration, which meant that enlargeStringInfo would get called once per loop. It should be much more efficient to get rid of the loop, calculate the number of spaces with "level * 4", and append all the spaces in one go.

  Here we additionally adjust the appendStringInfoSpaces function so it makes use of memset rather than a while loop to apply the required spaces to the StringInfo. One of the problems with the while loop was that it was incrementing one variable and decrementing another once per iteration; that's more work than is required to get the job done. We may as well use memset for this rather than trying to optimize the existing loop. Some testing has shown memset is faster even for very small sizes.

  Discussion: https://postgr.es/m/CAApHDvp_rKkvwudBKgBHniNRg67bzXVjyvVKfX0G2zS967K43A@mail.gmail.com
* Update copyright for 2023 (Bruce Momjian, 2023-01-02)

  Backpatch-through: 11
* Harmonize more parameter names in bulk. (Peter Geoghegan, 2022-09-20)

  Make sure that function declarations use names that exactly match the corresponding names from function definitions in optimizer, parser, utility, libpq, and "commands" code, as well as in remaining library code. Do the same for all code related to frontend programs (with the exception of pg_dump/pg_dumpall related code).

  Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Later commits will handle ecpg and pg_dump/pg_dumpall.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reviewed-By: David Rowley <dgrowleyml@gmail.com>
  Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
* Revert SQL/JSON features (Andrew Dunstan, 2022-09-01)

  This reverts the following and makes some associated cleanups:

  commit f79b803dc: Common SQL/JSON clauses
  commit f4fb45d15: SQL/JSON constructors
  commit 5f0adec25: Make STRING an unreserved_keyword.
  commit 33a377608: IS JSON predicate
  commit 1a36bc9db: SQL/JSON query functions
  commit 606948b05: SQL JSON functions
  commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
  commit 4e34747c8: JSON_TABLE
  commit fadb48b00: PLAN clauses for JSON_TABLE
  commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
  commit 14d3f24fa: Further improve jsonb_sqljson parallel test
  commit a6baa4bad: Documentation for SQL/JSON features
  commit b46bcf7a4: Improve readability of SQL/JSON documentation.
  commit 112fdb352: Fix finalization for json_objectagg and friends
  commit fcdb35c32: Fix transformJsonBehavior
  commit 4cd8717af: Improve a couple of sql/json error messages
  commit f7a605f63: Small cleanups in SQL/JSON code
  commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
  commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
  commit a1e7616d6: Rework SQL/JSON documentation
  commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
  commit 3c633f32b: Only allow returning string types or bytea from json_serialize
  commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.

  The release notes are also adjusted.

  Backpatch to release 15.

  Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org
* Fix formatting and comment typos (John Naylor, 2022-08-04)

  Justin Pryzby

  Discussion: https://www.postgresql.org/message-id/20220801181136.GJ15006%40telsasoft.com
* auto_explain: Add GUC to log query parameters (Michael Paquier, 2022-07-06)

  auto_explain.log_parameter_max_length is a new GUC, part of the extension, similar to the corresponding core setting, that controls the inclusion of query parameters in the logged explain output.

  More tests are added to check the behavior of this new parameter: when parameters are logged in full (the default of -1), when disabled (value of 0) and when partially truncated (a value different from the other two).

  Author: Dagfinn Ilmari Mannsåker
  Discussion: https://postgr.es/m/87ee09mohb.fsf@wibble.ilmari.org
* Fix EXPLAIN MERGE output when no tuples are processed (Alvaro Herrera, 2022-05-18)

  An 'else' clause was misplaced in commit 598ac10be1c2, making zero-rows output look a bit silly. Add a test case for it.

  Pointed out by Tom Lane.
  Discussion: https://postgr.es/m/21030.1652893083@sss.pgh.pa.us
* Make EXPLAIN MERGE output format more compact (Alvaro Herrera, 2022-05-18)

  We can use a single line to print all tuple counts that MERGE processed, for conciseness, and elide those that are zeroes. Non-text formats report all numbers, as is typical.

  Per comment from Justin Pryzby <pryzby@telsasoft.com>
  Discussion: https://postgr.es/m/20220511163350.GL19626@telsasoft.com
* Pre-beta mechanical code beautification. (Tom Lane, 2022-05-12)

  Run pgindent, pgperltidy, and reformat-dat-files. I manually fixed a couple of comments that pgindent uglified.
* Track I/O timing for temporary file blocks in EXPLAIN (BUFFERS) (Michael Paquier, 2022-04-08)

  Previously, the output of the EXPLAIN (BUFFERS) option showed only the I/O timing spent reading and writing shared and local buffers. This commit adds on top of that the I/O timing for temporary buffers in the output of EXPLAIN (for spilled external sorts, hashes, materializations, etc.). This can be helpful for users in cases where the I/O related to temporary buffers is the bottleneck.

  Like its cousin, this information is available only when track_io_timing is enabled.

  Testing the patch shows an extra overhead of up to 1% even when using gettimeofday() as the implementation for interval timings; that is measurable, though only slightly above the usual noise range.

  Author: Masahiko Sawada
  Reviewed-by: Georgios Kokolatos, Melanie Plageman, Julien Rouhaud, Ranier Vilela
  Discussion: https://postgr.es/m/CAD21AoAJgotTeP83p6HiAGDhs_9Fw9pZ2J=_tYTsiO5Ob-V5GQ@mail.gmail.com
* Teach planner and executor about monotonic window funcs (David Rowley, 2022-04-08)

  Window functions such as row_number() always return a value higher than the previously returned value for tuples in any given window partition. Traditionally, queries such as

      SELECT * FROM (
          SELECT *, row_number() over (order by c) rn FROM t
      ) t WHERE rn <= 10;

  were executed fairly inefficiently. Neither the query planner nor the executor knew that once rn made it to 11, nothing further would match the outer query's WHERE clause. It would blindly continue until all tuples were exhausted from the subquery.

  Here we implement means to make the above execute more efficiently. This is done by way of adding a pg_proc.prosupport function to various of the built-in window functions and adding supporting code to allow the support function to inform the planner if the window function is monotonically increasing, monotonically decreasing, both or neither. The planner is then able to make use of that information and possibly allow the executor to short-circuit execution by way of adding a "run condition" to the WindowAgg to allow it to determine if some of its execution work can be skipped.

  This "run condition" is not like a normal filter. These run conditions are only built using quals comparing values to monotonic window functions. For monotonically increasing functions, quals making use of the btree operators for <, <= and = can be used (assuming the window function column is on the left). You can see here that once such a condition becomes false, a monotonically increasing function could never make it subsequently true again. For monotonically decreasing functions the >, >= and = btree operators for the given type can be used for run conditions.

  The best-case situation for this is when there is a single WindowAgg node without a PARTITION BY clause. Here, when the run condition becomes false, the WindowAgg node can simply return NULL. No more tuples will ever match the run condition.

  It's a little more complex when there is a PARTITION BY clause. In this case, we cannot return NULL as we must still process other partitions. To speed this case up we pull tuples from the outer plan to check if they're from the same partition and simply discard them if they are. When we find a tuple belonging to another partition we start processing as normal again until the run condition becomes false or we run out of tuples to process.

  When there are multiple WindowAgg nodes to evaluate, the situation is more complicated. For intermediate WindowAggs we must ensure we always return all tuples to the calling node; any filtering done could lead to incorrect results in WindowAgg nodes above. For all intermediate nodes, we can still save some work when the run condition becomes false: we've no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot reference the value of these, and these tuples will not appear in the final result anyway. The savings here are small in comparison to what can be saved in the top-level WindowAgg, but still worthwhile.

  Intermediate WindowAgg nodes never filter out tuples, but here we change WindowAgg so that the top-level WindowAgg filters out tuples that don't match the intermediate WindowAgg node's run condition. Such filters appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.

  Here we add prosupport functions to allow the above to work for row_number(), rank(), dense_rank(), count(*) and count(expr). It appears technically possible to do the same for min() and max(), however, it seems unlikely to be useful enough, so that's not done here.

  Bump catversion.

  Author: David Rowley
  Reviewed-by: Andy Fan, Zhihong Yu
  Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
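  A sketch of the resulting plan for the query above (shape abbreviated; costs omitted):

      EXPLAIN (COSTS OFF)
      SELECT * FROM (
          SELECT *, row_number() over (order by c) rn FROM t
      ) t WHERE rn <= 10;
      --  WindowAgg
      --    Run Condition: (row_number() OVER (?) <= 10)
      --    ->  Sort
      --          Sort Key: t.c
      --          ->  Seq Scan on t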
* JSON_TABLE (Andrew Dunstan, 2022-04-04)

  This feature allows jsonb data to be treated as a table and thus used in a FROM clause like other tabular data. Data can be selected from the jsonb using jsonpath expressions, and hoisted out of nested structures in the jsonb to form multiple rows, more or less like an outer join.

  Nikita Glukhov

  Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu (whose name I previously misspelled), Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

  Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
* Add support for MERGE SQL command (Alvaro Herrera, 2022-03-28)

  MERGE performs actions that modify rows in the target table using a source table or query. MERGE provides a single SQL statement that can conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise require multiple PL statements. For example,

      MERGE INTO target AS t
      USING source AS s
      ON t.tid = s.sid
      WHEN MATCHED AND t.balance > s.delta THEN
        UPDATE SET balance = t.balance - s.delta
      WHEN MATCHED THEN
        DELETE
      WHEN NOT MATCHED AND s.delta > 0 THEN
        INSERT VALUES (s.sid, s.delta)
      WHEN NOT MATCHED THEN
        DO NOTHING;

  MERGE works with regular tables, partitioned tables and inheritance hierarchies, including column and row security enforcement, as well as support for row and statement triggers and transition tables therein.

  MERGE is optimized for OLTP and is parameterizable, though also useful for large scale ETL/ELT. MERGE is not intended to be used in preference to existing single SQL commands for INSERT, UPDATE or DELETE since there is some overhead. MERGE can be used from PL/pgSQL.

  MERGE does not support targeting updatable views or foreign tables, and RETURNING clauses are not allowed either. These limitations are likely fixable with sufficient effort. Rewrite rules are also not supported, but it's not clear that we'd want to support them.

  Author: Pavan Deolasee <pavan.deolasee@gmail.com>
  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Author: Amit Langote <amitlangote09@gmail.com>
  Author: Simon Riggs <simon.riggs@enterprisedb.com>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
  Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
  Reviewed-by: Japin Li <japinli@hotmail.com>
  Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
  Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
  Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
  Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
* Fix collection of typos in the code and the documentation (Michael Paquier, 2022-03-15)

  Some words were duplicated while other places were grammatically incorrect, including one variable name in the code.

  Author: Otto Kekalainen, Justin Pryzby
  Discussion: https://postgr.es/m/7DDBEFC5-09B6-4325-B942-B563D1A24BDC@amazon.com
* Add compute_query_id = regress (Michael Paquier, 2022-02-22)

  "regress" is a new mode added to compute_query_id aimed at facilitating regression testing when a module computing query IDs is loaded into the backend, like pg_stat_statements. It works the same way as "auto", meaning that query IDs are computed if a module enables it, except that query IDs are hidden in EXPLAIN outputs to ensure regression output stability.

  Like any GUC of this kind (force_parallel_mode, etc.), this new configuration can be added to an instance's postgresql.conf, or just passed down with PGOPTIONS at the command level. compute_query_id uses an enum for its set of option values, meaning that this addition ensures ABI compatibility.

  Using this new configuration mode allows installcheck-world to pass when running the tests on an instance with pg_stat_statements enabled, stabilizing the test output while checking the paths doing query ID computations.

  Reported-by: Anton Melnikov
  Reviewed-by: Julien Rouhaud
  Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
  Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
  Backpatch-through: 14
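  Usage sketch, passing the mode via PGOPTIONS while running the tests against an instance with pg_stat_statements loaded:

      PGOPTIONS='-c compute_query_id=regress' make installcheck-world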