path: root/src/backend/nodes
...
* meson: Add initial version of meson based build system (Andres Freund, 2022-09-21)

Autoconf is showing its age; fewer and fewer contributors know how to wrangle it. Recursive make has a lot of hard-to-resolve dependency issues and slow incremental rebuilds. Our home-grown MSVC build system is hard to maintain for developers not using Windows and runs tests serially. While these and other issues could individually be addressed with incremental improvements, together they seem best addressed by moving to a more modern build system.

After evaluating different build system choices, we chose to use meson, to a good degree based on the adoption by other open source projects.

We decided that it's more realistic to commit a relatively early version of the new build system and mature it in tree.

This commit adds an initial version of a meson based build system. It supports building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD, Solaris and Windows (however only gcc is supported on aix, solaris).

For Windows/MSVC postgres can now be built with ninja (faster, particularly for incremental builds) and msbuild (supporting the visual studio GUI, but building slower).

Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM bitcode generation, documentation adjustments) are done in subsequent commits requiring further review. Other aspects (e.g. not installing test-only extensions) are not yet addressed.

When building on Windows with msbuild, builds are slower when using a visual studio version older than 2019, because those versions do not support MultiToolTask, required by meson for intra-target parallelism.

The plan is to remove the MSVC specific build system in src/tools/msvc soon after reaching feature parity. However, we're not planning to remove the autoconf/make build system in the near future. Likely we're going to keep at least the parts required for PGXS to keep working until all supported versions build with meson.

Some initial help for postgres developers is at https://wiki.postgresql.org/wiki/Meson

With contributions from Thomas Munro, John Naylor, Stone Tickle and others.

Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
* Revise tree-walk APIs to improve spec compliance & silence warnings. (Tom Lane, 2022-09-20)

expression_tree_walker and allied functions have traditionally declared their callback functions as, say, "bool (*walker) ()" to allow for variation in the declared types of the callback functions' context argument. This is apparently going to be forbidden by the next version of the C standard, and the latest version of clang warns about that. In any case it's always been pretty poor for error-detection purposes, so fixing it is a good thing to do.

What we want to do is change the callback argument declarations to be like "bool (*walker) (Node *node, void *context)", which is correct so far as expression_tree_walker and friends are concerned, but not change the actual callback functions. Strict compliance with the C standard would require changing them to declare their arguments as "void *context" and then cast to the appropriate context struct type internally. That'd be very invasive and it would also introduce a bunch of opportunities for future bugs, since we'd no longer have any check that the correct sort of context object is passed by outside callers or internal recursion cases. Therefore, we're just going to ignore the standard's position that "void *" isn't necessarily compatible with struct pointers. No machine built in the last forty or so years actually behaves that way, so it's not worth introducing bug hazards for compatibility with long-dead hardware.

To silence these compiler warnings, introduce a layer of macro wrappers that cast the supplied function name to the official argument type. Thanks to our use of -Wcast-function-type, this will still produce a warning if the supplied function is seriously incompatible with the required signature, without going as far as the official spec restriction does.

This method fixes the problem without any need for source code changes outside nodeFuncs.h/.c. However, it is an ABI break because the physically called functions now have names ending in "_impl". Hence we can only fix it this way in HEAD. In the back branches, we'll have to settle for disabling -Wdeprecated-non-prototype.

Discussion: https://postgr.es/m/CA+hUKGKpHPDTv67Y+s6yiC8KH5OXeDg6a-twWo_xznKTcG0kSA@mail.gmail.com
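A minimal, self-contained sketch of the wrapper-macro technique described above; the typedef name, the _impl name, and the trivial walker body are illustrative stand-ins, not the exact declarations in nodeFuncs.h/.c:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Node { int type; } Node;

    /* "Official" callback type: context is passed as void *. */
    typedef bool (*tree_walker_callback) (Node *node, void *context);

    /* Physically-called function; a real walker would recurse over children. */
    static bool
    expression_tree_walker_impl(Node *node, tree_walker_callback walker,
                                void *context)
    {
        (void) walker;
        (void) context;
        if (node == NULL)
            return false;
        return false;           /* stand-in body */
    }

    /*
     * Wrapper macro: callers keep a specifically-typed context argument, and
     * the cast keeps the compiler quiet while -Wcast-function-type still
     * flags seriously mismatched signatures.
     */
    #define expression_tree_walker(node, walker, context) \
        expression_tree_walker_impl(node, (tree_walker_callback) (walker), context)

    typedef struct MyContext { int count; } MyContext;

    static bool
    count_nodes_walker(Node *node, MyContext *cxt)
    {
        if (node == NULL)
            return false;
        cxt->count++;
        return expression_tree_walker(node, count_nodes_walker, cxt);
    }

    int
    main(void)
    {
        Node      n = { 1 };
        MyContext cxt = { 0 };

        count_nodes_walker(&n, &cxt);
        return cxt.count == 1 ? 0 : 1;
    }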
* Revert SQL/JSON features (Andrew Dunstan, 2022-09-01)

This reverts the following and makes some associated cleanups:

    commit f79b803dc: Common SQL/JSON clauses
    commit f4fb45d15: SQL/JSON constructors
    commit 5f0adec25: Make STRING an unreserved_keyword.
    commit 33a377608: IS JSON predicate
    commit 1a36bc9db: SQL/JSON query functions
    commit 606948b05: SQL JSON functions
    commit 49082c2cc: RETURNING clause for JSON() and JSON_SCALAR()
    commit 4e34747c8: JSON_TABLE
    commit fadb48b00: PLAN clauses for JSON_TABLE
    commit 2ef6f11b0: Reduce running time of jsonb_sqljson test
    commit 14d3f24fa: Further improve jsonb_sqljson parallel test
    commit a6baa4bad: Documentation for SQL/JSON features
    commit b46bcf7a4: Improve readability of SQL/JSON documentation.
    commit 112fdb352: Fix finalization for json_objectagg and friends
    commit fcdb35c32: Fix transformJsonBehavior
    commit 4cd8717af: Improve a couple of sql/json error messages
    commit f7a605f63: Small cleanups in SQL/JSON code
    commit 9c3d25e17: Fix JSON_OBJECTAGG uniquefying bug
    commit a79153b7a: Claim SQL standard compliance for SQL/JSON features
    commit a1e7616d6: Rework SQL/JSON documentation
    commit 8d9f9634e: Fix errors in copyfuncs/equalfuncs support for JSON node types.
    commit 3c633f32b: Only allow returning string types or bytea from json_serialize
    commit 67b26703b: expression eval: Fix EEOP_JSON_CONSTRUCTOR and EEOP_JSONEXPR size.

The release notes are also adjusted.

Backpatch to release 15.

Discussion: https://postgr.es/m/40d2c882-bcac-19a9-754d-4299e1d87ac7@postgresql.org
* Remove redundant spaces in _outA_Expr() output (Peter Eisentraut, 2022-08-15)

Since WRITE_NODE_FIELD() output always starts with a space, we don't need to go out of our way to print another space right before it. This change is only for visual appearance; the tokenizer on the reading side would read it the same way (but there is no read support for A_Expr at this time anyway).
* Add missing fields to _outConstraint() (Peter Eisentraut, 2022-08-13)

As of 897795240cfaaed724af2f53ed2c50c9862f951f, check constraints can be declared invalid. But that patch didn't update _outConstraint() to also show the relevant struct fields (which were only applicable to foreign keys before that). This currently only affects debugging output, so no impact in practice.
* Fix _outConstraint() for "identity" constraints (Peter Eisentraut, 2022-08-12)

The set of fields printed by _outConstraint() in the CONSTR_IDENTITY case didn't match the set of fields actually used in that case. (The code was probably carelessly copied from the CONSTR_DEFAULT case.) Fix that by using the right set of fields. Since there is no read support for this node type, this is really just for debugging output right now, so it doesn't affect anything important.
* Add missing space in _outA_Const() output (Peter Eisentraut, 2022-08-11)

Mistake introduced by 639a86e36aaecb84faaf941dcd0b183ba0aba9e9.
* Fix MSVC build script's check for obsolete node support functions. (Tom Lane, 2022-08-08)

Commit 964d01ae9 was a few bricks shy of a load here: the script checked whether gen_node_support.pl itself had been updated since it was last run, but not whether any of its input files had been updated. Fix that. While here, scrape the list of input files from the Makefiles rather than having a duplicate copy, as we do for most other lists of source files.

In passing, improve gen_node_support.pl's error report for an incorrect file list.

Per gripe from Amit Kapila.

Discussion: https://postgr.es/m/CAA4eK1KQk4vP-3mTAz26h-PRUZaGu8Fc=q-ZKSajsAthH0A15w@mail.gmail.com
* Fix incorrect tests for SRFs in relation_can_be_sorted_early(). (Tom Lane, 2022-08-03)

Commit fac1b470a thought we could check for set-returning functions by testing only the top-level node in an expression tree. This is wrong in itself, and to make matters worse it encouraged others to make the same mistake, by exporting tlist.c's special-purpose IS_SRF_CALL() as a widely-visible macro. I can't find any evidence that anyone's taken the bait, but it was only a matter of time.

Use expression_returns_set() instead, and stuff the IS_SRF_CALL() genie back in its bottle, this time with a warning label. I also added a couple of cross-reference comments.

After a fair amount of fooling around, I've despaired of making a robust test case that exposes the bug reliably, so no test case here. (Note that the test case added by fac1b470a is itself broken, in that it doesn't notice if you remove the code change. The repro given by the bug submitter currently doesn't fail either in v15 or HEAD, though I suspect that may indicate an unrelated bug.)

Per bug #17564 from Martijn van Oosterhout. Back-patch to v13, as the faulty patch was.

Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
* Dump more fields when dumping planner internal data structures. (Tom Lane, 2022-07-20)

Commit 964d01ae9 marked a lot of fields as read_write_ignore to stay consistent with what was dumped by the manually-maintained outfuncs.c code. However, it seems that a pretty fair number of those omissions were either flat-out oversights, or a shortcut taken because hand-written code seemed like it'd be too much trouble. Let's upgrade things where it seems to make sense to dump.

To do this, we need to add support to gen_node_support.pl and outfuncs.c for variable-length arrays of Node pointers. That's pretty straightforward given the model of the existing code for arrays of scalars, but I found I needed to tighten the type-recognizing regexes in gen_node_support.pl. (As they stood, they mistook "foo **" for "foo *". Make sure they're all fully anchored to prevent additional problems.)

The main thing left un-done here is that a lot of partitioning-related structs are still not dumped, because they are bare structs not Nodes. I'm not sure about the wisdom of that choice ... but changing it would be fairly invasive, so it probably requires more justification than just making planner node dumps more complete.

Discussion: https://postgr.es/m/1295668.1658258637@sss.pgh.pa.us
* Make serialization of Nodes' scalar-array fields more robust. (Tom Lane, 2022-07-20)

When the ability to print variable-length-array fields was first added to outfuncs.c, there was no corresponding read capability, as it was used only for debug dumps of planner-internal Nodes. Not a lot of thought seems to have been put into the output format: it's just the space-separated array elements and nothing else. Later such fields appeared in Plan nodes, and still later we grew read support so that Plans could be transferred to parallel workers, but the original text format wasn't rethought.

It seems inadequate to me because (a) no cross-check is possible that we got the right number of array entries, (b) we can't tell the difference between a NULL pointer and a zero-length array, and (c) except for WRITE_INDEX_ARRAY, we'd crash if a non-zero length is specified when the pointer is NULL, a situation that can arise in some fields that we currently conveniently avoid printing.

Since we're currently in a campaign to make the Node infrastructure generally more it-just-works-without-thinking-about-it, now seems like a good time to improve this. Let's adopt a format similar to that used for Lists, that is "<>" for a NULL pointer or "(item item item)" otherwise. Also retool the code to not have so many copies of the identical logic.

I bumped catversion out of an abundance of caution, although I think that we don't use any such array fields in Nodes that can get into the catalogs.

Discussion: https://postgr.es/m/1528424.1658272135@sss.pgh.pa.us
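A stand-alone sketch of the output convention described above ("<>" for a NULL pointer, "(item item item)" otherwise); the helper and field names are made up for illustration and are not the actual outfuncs.c macros:

    #include <stdio.h>
    #include <stddef.h>

    /* Write one integer-array field in the List-like text format. */
    static void
    write_int_array(FILE *out, const char *fieldname, const int *arr, int len)
    {
        fprintf(out, " :%s ", fieldname);
        if (arr == NULL)
        {
            /* a NULL pointer is now distinguishable from a zero-length array */
            fputs("<>", out);
            return;
        }
        fputc('(', out);
        for (int i = 0; i < len; i++)
            fprintf(out, i == 0 ? "%d" : " %d", arr[i]);
        fputc(')', out);
    }

    int
    main(void)
    {
        int vals[] = {3, 1, 2};

        write_int_array(stdout, "sortColIdx", vals, 3);  /* :sortColIdx (3 1 2) */
        write_int_array(stdout, "collations", NULL, 0);  /* :collations <> */
        putchar('\n');
        return 0;
    }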
* Add output directory option to gen_node_support.pl (Andres Freund, 2022-07-18)

This is in preparation for building postgres with meson / ninja.

When building with meson, commands are run at the root of the build tree. Add an option to put build output into the appropriate place. This can be utilized by src/tools/msvc/ for a minor simplification, which also provides some coverage for the new option.

Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/5e216522-ba3c-f0e6-7f97-5276d0270029@enterprisedb.com
* Tighten up parsing logic in gen_node_support.pl. (Tom Lane, 2022-07-14)

Teach this script to handle function pointer fields honestly. Previously they were just silently ignored, but that's not likely to be a behavior we can accept indefinitely.

This mostly entails fixing it so that a field declaration spanning multiple lines can be parsed, because we have a bunch of such fields that're laid out that way. But that's a good improvement in its own right. With that change and a minor regex adjustment, the only struct it fails to parse in the node-defining headers is A_Const, because of the embedded union. The path of least resistance is to move that union declaration outside the struct.

Having done those things, we can make it error out if it finds any within-struct syntax it doesn't understand, which seems like a pretty important property for robustness.

This commit doesn't change the output files at all; it's just in the way of future-proofing.

Discussion: https://postgr.es/m/2593369.1657759779@sss.pgh.pa.us
* Remove artificial restrictions on which node types have out/read funcs. (Tom Lane, 2022-07-13)

The initial version of gen_node_support.pl manually excluded most utility statement node types from having out/read support, and also some raw-parse-tree-only node types. That was mostly to keep the output comparable to the old hand-maintained code. We'd like to have out/read support for utility statements, for debugging purposes and so that they can be included in new-style SQL functions; so it's time to lift that restriction.

Most if not all of the previously-excluded raw-parse-tree-only node types can appear in expression subtrees of utility statements, so they have to be handled too.

We don't quite have full read support yet; certain custom_read_write node types need to have their handwritten read functions implemented before that will work.

Doing this allows us to drop the previous hack in _outQuery to not dump the utilityStmt field in most cases, which means we no longer need manually-maintained out/read functions for Query, so get rid of those in favor of auto-generating them.

Fix a couple of omissions in gen_node_support.pl that are exposed through having to handle more node types.

catversion bump forced because somebody was sloppy about the field order in the manually-maintained Query out/read functions. (Committers should note that almost all changes in parsenodes.h are now grounds for a catversion bump.)
* Fix XID list support some more (Alvaro Herrera, 2022-07-13)

Read/out support in 5ca0fe5c8ad7 was missing/incomplete, per Tom Lane. Again, as far as core is concerned, this is not only dead code but also untested; however, third parties may come to rely on it, so the standard features should work.

Discussion: https://postgr.es/m/1548311.1657636605@sss.pgh.pa.us
* Tidy up code in get_cheapest_group_keys_order() (David Rowley, 2022-07-13)

There are a few things that we could do a little better within get_cheapest_group_keys_order():

1. We should be using list_free() rather than pfree() on a List.

2. We should use for_each_from() instead of manually coding a for loop to skip the first n elements of a List.

3. list_truncate(list_copy(...), n) is not a great way to copy the first n elements of a list. Let's invent list_copy_head() for that. That way we don't need to copy the entire list just to truncate it directly afterwards.

4. We can simplify finding the cheapest cost by setting the cheapest cost variable to DBL_MAX. That allows us to skip special-casing the initial iteration of the loop.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvrGyL3ft8waEkncG9y5HDMu5TFFJB1paoTC8zi9YK97Nw@mail.gmail.com
Backpatch-through: 15, where get_cheapest_group_keys_order was added.
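A rough sketch of points 3 and 4 from the list above, using a simplified stand-in List type rather than PostgreSQL's pg_list.h API:

    #include <float.h>
    #include <stdlib.h>

    typedef struct List { int length; int *items; } List;

    /* Point 3: copy only the first n elements, instead of copying the whole
     * list and truncating it afterwards. */
    static List *
    list_copy_head_sketch(const List *src, int n)
    {
        List *dst = malloc(sizeof(List));

        if (n > src->length)
            n = src->length;
        dst->length = n;
        dst->items = malloc(sizeof(int) * (n > 0 ? n : 1));
        for (int i = 0; i < n; i++)
            dst->items[i] = src->items[i];
        return dst;
    }

    /* Point 4: seed the running minimum with DBL_MAX so the first candidate
     * needs no special-casing. */
    static int
    cheapest_index(const double *costs, int ncosts)
    {
        double best_cost = DBL_MAX;
        int    best = -1;

        for (int i = 0; i < ncosts; i++)
        {
            if (costs[i] < best_cost)
            {
                best_cost = costs[i];
                best = i;
            }
        }
        return best;
    }

    int
    main(void)
    {
        int     vals[] = {5, 6, 7, 8};
        List    src = {4, vals};
        double  costs[] = {4.0, 2.5, 9.1};
        List   *head = list_copy_head_sketch(&src, 2);   /* {5, 6} */

        return (head->length == 2 && cheapest_index(costs, 3) == 1) ? 0 : 1;
    }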
* Add defenses against unexpected changes in the NodeTag enum list. (Tom Lane, 2022-07-12)

Having different build systems producing different contents of the NodeTag enum would be catastrophic for extension ABI stability. But that ordering depends on the order in which gen_node_support.pl processes its input files. It seems too fragile to let the Makefiles, MSVC build scripts, and soon meson build scripts all set this order independently. As a klugy but serviceable solution, put a canonical copy of the file list into gen_node_support.pl itself, and check that against the files given on the command line.

Also, while it's fine to add and delete node tags during development, we must not let the assigned NodeTag values change unexpectedly in stable branches. Add a cross-check that can be enabled when a branch is forked off (or later, but that is a time when we're unlikely to miss doing it). It just checks that the last auto-assigned number doesn't change, which is simplistic but will catch the most likely sorts of mistakes.

From time to time we do need to add a node tag in a stable branch. To support doing that without changing the branch's auto-assigned tag numbers, invent pg_node_attr(nodetag_number(VALUE)) which can be used to give such a node a hand-assigned tag above the last auto-assigned one.

Discussion: https://postgr.es/m/1249010.1657574337@sss.pgh.pa.us
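A hypothetical example of the nodetag_number attribute; the node name and the tag value are invented, and the stubs stand in for declarations from nodes/nodes.h:

    /* Stubs so the sketch stands alone; the real ones come from nodes.h. */
    typedef enum NodeTag { T_Invalid = 0 } NodeTag;
    #define pg_node_attr(...)   /* consumed by gen_node_support.pl, no-op in C */

    /* Hypothetical node added in a stable branch, pinned to a hand-assigned
     * tag above the branch's last auto-assigned number. */
    typedef struct MyBackpatchedNode
    {
        pg_node_attr(nodetag_number(467))

        NodeTag     type;
        int         some_field;
    } MyBackpatchedNode;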
* Invent nodetag_only attribute for Nodes.Tom Lane2022-07-12
| | | | | | | | | | | | This allows explaining gen_node_support.pl's handling of execnodes.h and some other input files as being a shortcut for explicit marking of all their node declarations as pg_node_attr(nodetag_only). I foresee that someday we might need to be more fine-grained about that, and this change provides the infrastructure needed to do so. For now, it just allows removal of the script's klugy special case for CallContext and InlineCodeBlock. Discussion: https://postgr.es/m/75063.1657410615@sss.pgh.pa.us
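A short sketch of what an explicit nodetag_only marking would look like; the struct shown is an illustrative stand-in, not the actual CallContext declaration:

    #include <stdbool.h>

    typedef enum NodeTag { T_Invalid = 0 } NodeTag;      /* stub for the sketch */
    #define pg_node_attr(...)   /* parsed by gen_node_support.pl, no-op in C */

    /* A node marked nodetag_only gets a NodeTag value assigned, but no
     * generated copy/equal/out/read functions. */
    typedef struct CallContextLike
    {
        pg_node_attr(nodetag_only)

        NodeTag     type;
        bool        atomic;
    } CallContextLike;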
* Add copy/equal support for XID lists (Alvaro Herrera, 2022-07-12)

Commit f10a025cfe97 added support for List to store Xids, but didn't handle the new type in all cases. Add some obviously necessary pieces.

As far as I am aware, this is all dead code as far as core code is concerned, but it seems unacceptable not to have it in case third-party code wants to rely on this type of list. (Some parts of the List API remain unimplemented, but that can be fixed as and when needed -- see lack of list_intersection_oid, list_deduplicate_int as precedents.)

Discussion: https://postgr.es/m/20220708164534.nbejhgt4ajz35p65@alvherre.pgsql
* Rationalize order of input files for gen_node_support.pl. (Tom Lane, 2022-07-11)

Per a question from Andres Freund. While here, also make the list of nodetag-only files easier to compare to the full list of input files.

Discussion: https://postgr.es/m/20220710214622.haiektrjzisob6rl@awork3.anarazel.de
* Make assorted quality-of-life improvements in gen_node_support.pl. (Tom Lane, 2022-07-09)

Fix incorrect reporting of the location of errors (such as bogus node attributes). Add header comments to the generated files, containing copyright notices and reminders that they are generated files, as we do in other file-generating scripts. Arrange to not leave a clutter of temporary files when the script detects an error.

Discussion: https://postgr.es/m/3843645.1657385930@sss.pgh.pa.us
* Doc: rearrange high-level commentary about node support coverage. (Tom Lane, 2022-07-09)

copyfuncs.c and friends no longer seem like great places to put high-level remarks about what's covered and what isn't. Move that material to backend/nodes/README and other more-prominent places. Add back (versions of) some remarks that disappeared in 2be87f092.

Discussion: https://postgr.es/m/3843645.1657385930@sss.pgh.pa.us
* Remove code sections obsoleted by node support automation (Peter Eisentraut, 2022-07-09)

This removes the code sections that were ifdef'ed out by 964d01ae90c314eb31132c2e7712d5d9fc237331.
* Fix vpath build (Peter Eisentraut, 2022-07-09)
* Automatically generate node support functions (Peter Eisentraut, 2022-07-09)

Add a script to automatically generate the node support functions (copy, equal, out, and read, as well as the node tags enum) from the struct definitions.

For each of the four node support files, it creates two include files, e.g., copyfuncs.funcs.c and copyfuncs.switch.c, to include in the main file. All the scaffolding of the main file stays in place.

I have tried to mostly make the coverage of the output match what is currently there. For example, one could now do out/read coverage of utility statement nodes, but I have manually excluded those for now. The reason is mainly that it's easier to diff the before and after, and adding a bunch of stuff like this might require a separate analysis and review.

Subtyping (TidScan -> Scan) is supported.

For the hard cases, you can just write a manual function and exclude generating one. For the not so hard cases, there is a way of annotating struct fields to get special behaviors. For example, pg_node_attr(equal_ignore) has the field ignored in equal functions.

(In this patch, I have only ifdef'ed out the code that could be removed, mainly so that it won't constantly have merge conflicts. It will be deleted in a separate patch. All the code comments that are worth keeping from those sections have already been moved to the header files where the structs are defined.)

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/c1097590-a6a4-486a-64b1-e1f9cc0533ce%40enterprisedb.com
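A hypothetical node definition showing the field-annotation mechanism mentioned above; the struct and its fields are invented for illustration, with stubs standing in for nodes.h:

    typedef enum NodeTag { T_Invalid = 0 } NodeTag;      /* stub for the sketch */
    #define pg_node_attr(...)   /* read by gen_node_support.pl, no-op in C */

    /* The generated equal() function would skip cached_cost entirely, while
     * comparing the other fields as usual. */
    typedef struct ExampleNode
    {
        NodeTag     type;
        int         varno;
        double      cached_cost pg_node_attr(equal_ignore);
    } ExampleNode;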
* Adjust node serialization tag of A_Expr for consistency (Peter Eisentraut, 2022-07-08)

Changed from AEXPR to A_EXPR for consistency.

Discussion: https://www.postgresql.org/message-id/2592455.1657140387%40sss.pgh.pa.us
* Remove T_Join and T_Plan (Peter Eisentraut, 2022-07-08)

These are abstract node types that don't need to have a node tag defined.

Discussion: https://www.postgresql.org/message-id/2592455.1657140387%40sss.pgh.pa.us
* Fix wrong field order in _readMergeWhenClause(). (Tom Lane, 2022-07-06)

We hadn't noticed this because it's dead code: there is no situation where we read raw parse trees from text format. So maybe the right fix is to remove the function altogether, but I'll forbear for now; it's not the only dead code in readfuncs.c, I think.

Noted while comparing existing code to the results of Peter's auto-generation script.
* Change internal RelFileNode references to RelFileNumber or RelFileLocator. (Robert Haas, 2022-07-06)

We have been using the term RelFileNode to refer to either (1) the integer that is used to name the sequence of files for a certain relation within the directory set aside for that tablespace/database combination; or (2) that value plus the OIDs of the tablespace and database; or occasionally (3) the whole series of files created for a relation based on those values. Using the same name for more than one thing is confusing.

Replace RelFileNode with RelFileNumber when we're talking about just the single number, i.e. (1) from above, and with RelFileLocator when we're talking about all the things that are needed to locate a relation's files on disk, i.e. (2) from above. In the places where we refer to (3) as a relfilenode, instead refer to "relation storage".

Since there is a ton of SQL code in the world that knows about pg_class.relfilenode, don't change the name of that column, or of other SQL-facing things that derive their name from it. On the other hand, do adjust closely-related internal terminology. For example, the structure member names dbNode and spcNode appear to be derived from the fact that the structure itself was called RelFileNode, so change those to dbOid and spcOid. Likewise, various variables with names like rnode and relnode get renamed appropriately, according to how they're being used in context.

Hopefully, this is clearer than before. It is also preparation for future patches that intend to widen the relfilenumber fields from their current width of 32 bits. Variables that store a relfilenumber are now declared as type RelFileNumber rather than type Oid; right now, these are the same, but that can now more easily be changed.

Dilip Kumar, per an idea from me. Reviewed also by Andres Freund. I fixed some whitespace issues, changed a couple of words in a comment, and made one other minor correction.

Discussion: http://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com
Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
Discussion: http://postgr.es/m/CAFiTN-vTe79M8uDH1yprOU64MNFE+R3ODRuA+JWf27JbhY4hJw@mail.gmail.com
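A simplified sketch of the renamed structure, with stand-in typedefs; the real declarations live in the relpath/relfilelocator headers and may differ in detail:

    typedef unsigned int Oid;
    typedef Oid RelFileNumber;      /* same width as Oid today, but a distinct
                                     * name so it can be widened later */

    typedef struct RelFileLocatorSketch
    {
        Oid             spcOid;        /* tablespace (was spcNode) */
        Oid             dbOid;         /* database (was dbNode) */
        RelFileNumber   relNumber;     /* file-name number (was relNode) */
    } RelFileLocatorSketch;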
* Fix errors in copyfuncs/equalfuncs support for JSON node types. (Tom Lane, 2022-07-05)

Noted while comparing existing code to the output of the proposed patch to automate creation of these functions. Some of the changes are just cosmetic, but others represent real bugs. I've not attempted to analyze the user-visible impact.

Back-patch to v15 where this code came in.

Discussion: https://postgr.es/m/1794155.1656984188@sss.pgh.pa.us
* Implement List support for TransactionId (Alvaro Herrera, 2022-07-04)

Use it for RelationSyncEntry->streamed_txns, which is currently using an integer list.

The API support is not complete, not because it is hard to write but because it's unclear that it's worth the code space, there being so little use of XID lists.

Discussion: https://postgr.es/m/202205130830.g5ntonhztspb@alvherre.pgsql
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
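A hedged sketch of what using an XID list might look like, following the existing int/OID list patterns; the helper names (lappend_xid, list_member_xid, lfirst_xid) should be verified against pg_list.h:

    #include "postgres.h"
    #include "nodes/pg_list.h"

    /* Remember an XID in a list, avoiding duplicates (helper names assumed). */
    static void
    remember_streamed_xid(List **streamed_txns, TransactionId xid)
    {
        if (!list_member_xid(*streamed_txns, xid))
            *streamed_txns = lappend_xid(*streamed_txns, xid);
    }

    /* Linear scan over an XID list. */
    static bool
    was_streamed(List *streamed_txns, TransactionId xid)
    {
        ListCell   *lc;

        foreach(lc, streamed_txns)
        {
            if (lfirst_xid(lc) == xid)
                return true;
        }
        return false;
    }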
* Harden range_table_mutator() against null RangeTblEntry.subquery. (Tom Lane, 2022-06-26)

Commit 64919aaab made pull_up_simple_subquery set rte->subquery = NULL after doing the deed, so that we don't waste cycles copying a now-useless subquery tree around. This turns out to create a core dump hazard in range_table_mutator, which supposes that that field is never NULL. Apparently none of our own code invokes query_tree_mutator or range_table_mutator on the top Query after subquery pullup; but it wouldn't be surprising if outside code does, and anyway I'm working on a v16 patch that will need it.

We can fix this cleanly by just getting rid of the special-case handling of this field and treating it more like all the rest. I think the special case might be left over from a time when QTW_DONT_COPY_QUERY was the default behavior, but that was eons ago.

Thanks to Dean Rasheed for review.

Discussion: https://postgr.es/m/545569.1656107045@sss.pgh.pa.us
* Rename JsonIsPredicate.value_type, fix JSON backend/nodes/ infrastructure. (Tom Lane, 2022-05-13)

I started out with the intention to rename value_type to item_type to avoid a collision with a typedef name that appears on some platforms.

Along the way, I noticed that the adjacent field "format" was not being correctly handled by the backend/nodes/ infrastructure functions: copyfuncs.c erroneously treated it as a scalar, while equalfuncs, outfuncs, and readfuncs omitted handling it at all. This looks like it might be cosmetic at the moment because the field is always NULL after parse analysis; but that's likely a bug in itself, and the code's certainly not very future-proof. Let's fix it while we can still do so without forcing an initdb on beta testers.

Further study found a few other inconsistencies in the backend/nodes/ infrastructure for the recently-added JSON node types, so fix those too. catversion bumped because of potential change in stored rules.

Discussion: https://postgr.es/m/526703.1652385613@sss.pgh.pa.us
* Pre-beta mechanical code beautification. (Tom Lane, 2022-05-12)

Run pgindent, pgperltidy, and reformat-dat-files. I manually fixed a couple of comments that pgindent uglified.
* Handle NULL fields in WRITE_INDEX_ARRAY (Peter Eisentraut, 2022-04-27)

Unlike existing WRITE_*_ARRAY macros, WRITE_INDEX_ARRAY needs to handle the case that the field is NULL. We already have the convention to print NULL fields as "<>", so we do that here as well. There is currently no corresponding read function for this, so reading this back in is not implemented, but it could be if needed.

Reported-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-LN%3DbF8f9eU2R94dJtF54DfDvBq%2BovqHnOQqbinYDrUw%40mail.gmail.com
* Use WRITE_ENUM_FIELD for enum field (Peter Eisentraut, 2022-04-12)
* Make node output prefix match node structure name (Peter Eisentraut, 2022-04-12)

As done in e58136069687b9cf29c27281e227ac397d72141d.
* Teach planner and executor about monotonic window funcs (David Rowley, 2022-04-08)

Window functions such as row_number() always return a value higher than the previously returned value for tuples in any given window partition.

Traditionally queries such as:

    SELECT * FROM (
        SELECT *, row_number() over (order by c) rn FROM t
    ) t WHERE rn <= 10;

were executed fairly inefficiently. Neither the query planner nor the executor knew that once rn made it to 11 that nothing further would match the outer query's WHERE clause. It would blindly continue until all tuples were exhausted from the subquery.

Here we implement means to make the above execute more efficiently.

This is done by way of adding a pg_proc.prosupport function to various of the built-in window functions and adding supporting code to allow the support function to inform the planner if the window function is monotonically increasing, monotonically decreasing, both or neither. The planner is then able to make use of that information and possibly allow the executor to short-circuit execution by way of adding a "run condition" to the WindowAgg to allow it to determine if some of its execution work can be skipped.

This "run condition" is not like a normal filter. These run conditions are only built using quals comparing values to monotonic window functions. For monotonic increasing functions, quals making use of the btree operators for <, <= and = can be used (assuming the window function column is on the left). You can see here that once such a condition becomes false that a monotonic increasing function could never make it subsequently true again. For monotonically decreasing functions the >, >= and = btree operators for the given type can be used for run conditions.

The best-case situation for this is when there is a single WindowAgg node without a PARTITION BY clause. Here when the run condition becomes false the WindowAgg node can simply return NULL. No more tuples will ever match the run condition. It's a little more complex when there is a PARTITION BY clause. In this case, we cannot return NULL as we must still process other partitions. To speed this case up we pull tuples from the outer plan to check if they're from the same partition and simply discard them if they are. When we find a tuple belonging to another partition we start processing as normal again until the run condition becomes false or we run out of tuples to process.

When there are multiple WindowAgg nodes to evaluate then this complicates the situation. For intermediate WindowAggs we must ensure we always return all tuples to the calling node. Any filtering done could lead to incorrect results in WindowAgg nodes above. For all intermediate nodes, we can still save some work when the run condition becomes false. We've no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot reference the value of these and these tuples will not appear in the final result anyway. The savings here are small in comparison to what can be saved in the top-level WindowAgg, but still worthwhile.

Intermediate WindowAgg nodes never filter out tuples, but here we change WindowAgg so that the top-level WindowAgg filters out tuples that don't match the intermediate WindowAgg node's run condition. Such filters appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.

Here we add prosupport functions to allow the above to work for: row_number(), rank(), dense_rank(), count(*) and count(expr). It appears technically possible to do the same for min() and max(), however, it seems unlikely to be useful enough, so that's not done here.

Bump catversion

Author: David Rowley
Reviewed-by: Andy Fan, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
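A sketch of a prosupport function reporting monotonicity, modeled on the mechanism described above; the request-node and enum names (SupportRequestWFuncMonotonic, MONOTONICFUNC_INCREASING) are quoted from memory and should be checked against nodes/supportnodes.h:

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/supportnodes.h"

    PG_FUNCTION_INFO_V1(my_row_counter_support);

    Datum
    my_row_counter_support(PG_FUNCTION_ARGS)
    {
        Node   *rawreq = (Node *) PG_GETARG_POINTER(0);

        if (IsA(rawreq, SupportRequestWFuncMonotonic))
        {
            SupportRequestWFuncMonotonic *req =
                (SupportRequestWFuncMonotonic *) rawreq;

            /* This window function's value only ever increases within a
             * partition, so the planner may build a run condition from it. */
            req->monotonic = MONOTONICFUNC_INCREASING;
            PG_RETURN_POINTER(req);
        }

        PG_RETURN_POINTER(NULL);    /* not a request type we handle */
    }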
* Revert "Logical decoding of sequences"Tomas Vondra2022-04-07
| | | | | | | | | | | | | | | | | | | | | This reverts a sequence of commits, implementing features related to logical decoding and replication of sequences: - 0da92dc530c9251735fc70b20cd004d9630a1266 - 80901b32913ffa59bf157a4d88284b2b3a7511d9 - b779d7d8fdae088d70da5ed9fcd8205035676df3 - d5ed9da41d96988d905b49bebb273a9b2d6e2915 - a180c2b34de0989269fdb819bff241a249bf5380 - 75b1521dae1ff1fde17fda2e30e591f2e5d64b6a - 2d2232933b02d9396113662e44dca5f120d6830e - 002c9dd97a0c874fd1693a570383e2dd38cd40d5 - 05843b1aa49df2ecc9b97c693b755bd1b6f856a9 The implementation has issues, mostly due to combining transactional and non-transactional behavior of sequences. It's not clear how this could be fixed, but it'll require reworking significant part of the patch. Discussion: https://postgr.es/m/95345a19-d508-63d1-860a-f5c2f41e8d40@enterprisedb.com
* Allow asynchronous execution in more cases. (Etsuro Fujita, 2022-04-06)

In commit 27e1f1456, create_append_plan() only allowed the subplan created from a given subpath to be executed asynchronously when it was an async-capable ForeignPath. To extend coverage, this patch handles cases when the given subpath includes some other Path types as well that can be omitted in the plan processing, such as a ProjectionPath directly atop an async-capable ForeignPath, allowing asynchronous execution in partitioned-scan/partitioned-join queries with non-Var tlist expressions and more UNION queries.

Andrey Lepikhov and Etsuro Fujita, reviewed by Alexander Pyhalov and Zhihong Yu.

Discussion: https://postgr.es/m/659c37a8-3e71-0ff2-394c-f04428c76f08%40postgrespro.ru
* PLAN clauses for JSON_TABLE (Andrew Dunstan, 2022-04-05)

These clauses allow the user to specify how data from nested paths are joined, allowing considerable freedom in shaping the tabular output of JSON_TABLE.

PLAN DEFAULT allows the user to specify the global strategies when dealing with sibling or child nested paths. This is often sufficient to achieve the necessary goal, and is considerably simpler than the full PLAN clause, which allows the user to specify the strategy to be used for each named nested path.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
* JSON_TABLE (Andrew Dunstan, 2022-04-04)

This feature allows jsonb data to be treated as a table and thus used in a FROM clause like other tabular data. Data can be selected from the jsonb using jsonpath expressions, and hoisted out of nested structures in the jsonb to form multiple rows, more or less like an outer join.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu (whose name I previously misspelled), Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
* RETURNING clause for JSON() and JSON_SCALAR() (Andrew Dunstan, 2022-03-31)

This patch is extracted from a larger patch that allowed setting the default returned value from these functions to json or jsonb. That had problems, but this piece of it is fine. For these functions only json or jsonb can be specified in the RETURNING clause.

Extracted from an original patch from Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
* SQL JSON functions (Andrew Dunstan, 2022-03-30)

This patch introduces three SQL standard JSON functions:

    JSON() (incorrectly mentioned in my commit message for f4fb45d15c)
    JSON_SCALAR()
    JSON_SERIALIZE()

JSON() produces json values from text, bytea, json or jsonb values, and has facilities for handling duplicate keys. JSON_SCALAR() produces a json value from any scalar sql value, including json and jsonb. JSON_SERIALIZE() produces text or bytea from input which contains or represents json or jsonb.

For the most part these functions don't add any significant new capabilities, but they will be of use to users wanting standard compliant JSON handling.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
* SQL/JSON query functions (Andrew Dunstan, 2022-03-29)

This introduces the SQL/JSON functions for querying JSON data using jsonpath expressions. The functions are:

    JSON_EXISTS()
    JSON_QUERY()
    JSON_VALUE()

All of these functions only operate on jsonb. The workaround for now is to cast the argument to jsonb.

JSON_EXISTS() tests if the jsonpath expression applied to the jsonb value yields any values. JSON_VALUE() must return a single value, and an error occurs if it tries to return multiple values. JSON_QUERY() must return a json object or array, and there are various WRAPPER options for handling scalar or multi-value results. Both these functions have options for handling EMPTY and ERROR conditions.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
* IS JSON predicate (Andrew Dunstan, 2022-03-28)

This patch introduces the SQL standard IS JSON predicate. It operates on text and bytea values representing JSON, as well as on the json and jsonb types. Each test has an IS and IS NOT variant. The tests are:

    IS JSON [VALUE]
    IS JSON ARRAY
    IS JSON OBJECT
    IS JSON SCALAR
    IS JSON WITH | WITHOUT UNIQUE KEYS

These are mostly self-explanatory, but note that IS JSON WITHOUT UNIQUE KEYS is true whenever IS JSON is true, and IS JSON WITH UNIQUE KEYS is true whenever IS JSON is true except when IS JSON OBJECT is true and there are duplicate keys (which is never the case when applied to jsonb values).

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
* Add support for MERGE SQL command (Alvaro Herrera, 2022-03-28)

MERGE performs actions that modify rows in the target table using a source table or query. MERGE provides a single SQL statement that can conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise require multiple PL statements. For example,

    MERGE INTO target AS t
    USING source AS s
    ON t.tid = s.sid
    WHEN MATCHED AND t.balance > s.delta THEN
      UPDATE SET balance = t.balance - s.delta
    WHEN MATCHED THEN
      DELETE
    WHEN NOT MATCHED AND s.delta > 0 THEN
      INSERT VALUES (s.sid, s.delta)
    WHEN NOT MATCHED THEN
      DO NOTHING;

MERGE works with regular tables, partitioned tables and inheritance hierarchies, including column and row security enforcement, as well as support for row and statement triggers and transition tables therein.

MERGE is optimized for OLTP and is parameterizable, though also useful for large scale ETL/ELT. MERGE is not intended to be used in preference to existing single SQL commands for INSERT, UPDATE or DELETE since there is some overhead. MERGE can be used from PL/pgSQL.

MERGE does not support targeting updatable views or foreign tables, and RETURNING clauses are not allowed either. These limitations are likely fixable with sufficient effort. Rewrite rules are also not supported, but it's not clear that we'd want to support them.

Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
* SQL/JSON constructors (Andrew Dunstan, 2022-03-27)

This patch introduces the SQL/JSON standard constructors for JSON:

    JSON()
    JSON_ARRAY()
    JSON_ARRAYAGG()
    JSON_OBJECT()
    JSON_OBJECTAGG()

For the most part these functions provide facilities that mimic existing json/jsonb functions. However, they also offer some useful additional functionality. In addition to text input, the JSON() function accepts bytea input, which it will decode and construct a json value from. The other functions provide useful options for handling duplicate keys and null values.

This series of patches will be followed by a consolidated documentation patch.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
* Common SQL/JSON clauses (Andrew Dunstan, 2022-03-27)

This introduces some of the building blocks used by the SQL/JSON constructor and query functions. Specifically, it provides node executor and grammar support for the FORMAT JSON [ENCODING foo] clause, and values decorated with it, and for the RETURNING clause. The following SQL/JSON patches will leverage these.

Nikita Glukhov (who probably deserves an award for perseverance).

Reviewers have included (in no particular order) Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
* Allow specifying column lists for logical replication (Tomas Vondra, 2022-03-26)

This allows specifying an optional column list when adding a table to logical replication. The column list may be specified after the table name, enclosed in parentheses. Columns not included in this list are not sent to the subscriber, allowing the schema on the subscriber to be a subset of the publisher schema.

For UPDATE/DELETE publications, the column list needs to cover all REPLICA IDENTITY columns. For INSERT publications, the column list is arbitrary and may omit some REPLICA IDENTITY columns. Furthermore, if the table uses REPLICA IDENTITY FULL, a column list is not allowed.

The column list can contain only simple column references. Complex expressions, function calls etc. are not allowed. This restriction could be relaxed in the future.

During the initial table synchronization, only columns included in the column list are copied to the subscriber. If the subscription has several publications, containing the same table with different column lists, columns specified in any of the lists will be copied.

This means all columns are replicated if the table has no column list at all (which is treated as a column list with all columns), or when one of the publications is defined as FOR ALL TABLES (possibly IN SCHEMA that matches the schema of the table).

For partitioned tables, publish_via_partition_root determines whether the column list for the root or the leaf relation will be used. If the parameter is 'false' (the default), the list defined for the leaf relation is used. Otherwise, the column list for the root partition will be used.

Psql commands \dRp+ and \d <table-name> now display any column lists.

Author: Tomas Vondra, Alvaro Herrera, Rahila Syed
Reviewed-by: Peter Eisentraut, Alvaro Herrera, Vignesh C, Ibrar Ahmed, Amit Kapila, Hou zj, Peter Smith, Wang wei, Tang, Shi yu
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com