* Remove extra word from comment. (Robert Haas, 2018-02-22)
  Etsuro Fujita

  Discussion: http://postgr.es/m/5A8EAF74.5010905@lab.ntt.co.jp

* postgres_fdw: Fix interaction of PHVs with child joins. (Robert Haas, 2018-02-22)
  Commit f49842d1ee31b976c681322f76025d7732e860f3 introduced the concept of a
  child join, but did not update this code accordingly.

  Ashutosh Bapat, with cosmetic changes by me

  Discussion: http://postgr.es/m/CAFjFpRf=J_KPOtw+bhZeURYkbizr8ufSaXg6gPEF6DKpgH-t6g@mail.gmail.com

* Avoid another valgrind complaint about write() of uninitialized bytes. (Robert Haas, 2018-02-22)
  Peter Geoghegan, per buildfarm member skink and Andres Freund

  Discussion: http://postgr.es/m/20180221053426.gp72lw67yfpzkw7a@alap3.anarazel.de

* Try to stabilize EXPLAIN output in partition_check test. (Robert Haas, 2018-02-22)
  Commit 7d8ac9814bc9bb6df2d845dbabed38d7284c7c2c adjusted these tests in the
  hope of preserving the plan shape, but I failed to notice that the three
  partitions were, on my local machine, choosing two different plan shapes.
  This is probably related to the fact that all three tables have exactly the
  same row count. Try to improve the situation by making pht1_e about half as
  large as the other two.

  Per Tom Lane and the buildfarm.

  Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us

* Charge cpu_tuple_cost * 0.5 for Append and MergeAppend nodes. (Robert Haas, 2018-02-21)
  Previously, Append didn't charge anything at all, and MergeAppend charged
  only cpu_operator_cost, about half the value used here. This change might
  make MergeAppend plans slightly more likely to be chosen than before, since
  this commit increases the assumed cost for Append -- with default values --
  by 0.005 per tuple but MergeAppend by only 0.0025 per tuple. Since the
  comparisons required by MergeAppend are costed separately, it's not clear
  why MergeAppend needs to be otherwise more expensive than Append, so
  hopefully this is OK. (See the sketch of the arithmetic below.)

  Prior to partition-wise join, it didn't really matter whether or not an
  Append node had any cost of its own, because every plan had to use the same
  number of Append or MergeAppend nodes and in the same places. Only the
  relative cost of Append vs. MergeAppend made a difference. Now, however, it
  is possible to avoid some of the Append nodes using a partition-wise join,
  so it's worth making an effort. Pending patches for partition-wise aggregate
  care too, because an Append of Aggregate nodes will incur the Append
  overhead fewer times than an Aggregate over an Append. Although in most
  cases this change will favor the use of partition-wise techniques, it does
  the opposite when the join cardinality is greater than the sum of the input
  cardinalities. Since this situation arises in an existing regression test,
  I [rhaas] adjusted it to keep the overall plan shape approximately the same.

  Jeevan Chalke, per a suggestion from David Rowley. Reviewed by Ashutosh
  Bapat. Some changes by me. The larger patch series of which this patch is a
  part was also reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi,
  David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, Rafia Sabih,
  and me.

  Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com

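  The per-tuple numbers above follow from the default planner settings
  (cpu_tuple_cost = 0.01, cpu_operator_cost = 0.0025). The snippet below is
  only an illustrative sketch of that arithmetic, not the actual costsize.c
  code; the variable and function names are invented.

    #include <stdio.h>

    /* Default planner GUC values (postgresql.conf defaults). */
    static const double cpu_tuple_cost = 0.01;
    static const double cpu_operator_cost = 0.0025;

    int
    main(void)
    {
        double ntuples = 10000.0;

        /* New behavior described above: both Append and MergeAppend charge
         * cpu_tuple_cost * 0.5 per tuple passed through, i.e. 0.005/tuple. */
        double new_overhead = 0.5 * cpu_tuple_cost * ntuples;

        /* Old behavior: Append charged nothing; MergeAppend charged
         * cpu_operator_cost per tuple, i.e. 0.0025/tuple. */
        double old_mergeappend_overhead = cpu_operator_cost * ntuples;

        printf("new Append/MergeAppend overhead: %.1f\n", new_overhead);
        printf("old MergeAppend overhead:        %.1f\n", old_mergeappend_overhead);
        return 0;
    }
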
* Repair pg_upgrade's failure to preserve relfrozenxid for matviews. (Tom Lane, 2018-02-21)
  This oversight led to data corruption in matviews, manifesting as "could not
  access status of transaction" before our most recent releases, and "found
  xmin from before relfrozenxid" errors since then.

  The proximate cause of the problem seems to have been confusion between the
  task of preserving dropped-column status and the task of preserving
  frozenxid status. Those are required for distinct sets of relkinds, and the
  reasoning was entirely undocumented in the source code. In hopes of
  forestalling future errors of the same kind, try to improve the commentary
  in this area.

  In passing, also improve the remarkably unhelpful comments around
  pg_upgrade's set_frozenxids(). That's not actually buggy AFAICS, but good
  luck figuring out what it does from the old comments.

  Per report from Claudio Freire. It appears that bug #14852 from Alexey
  Ermakov is an earlier report of the same issue, and there may be other cases
  that we failed to identify at the time.

  Patch by me based on analysis by Andres Freund. The bug dates back to the
  introduction of matviews, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAGTBQpbrY9CdRGGhyBZ9yqY4jWaGC85rUF4X+R7d-aim=mBNsw@mail.gmail.com
  Discussion: https://postgr.es/m/20171013115320.28049.86457@wrigleys.postgresql.org

* Blindly attempt to adapt sepgsql regression tests. (Andres Freund, 2018-02-20)
  Commit bf6c614a2f2c58312b3be34a47e7fb7362e07bcb broke the sepgsql test due
  to a new invocation of the function access hook during grouping equal
  initialization.

  The new behaviour seems at least as correct as the old one, so try to adapt
  the tests. As I have no working sepgsql setup here, this is just going from
  buildfarm results.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180217000337.lfsdvro3l6ccsksp@alap3.anarazel.de

* Use platform independent type for TupleTableSlot->tts_off. (Andres Freund, 2018-02-20)
  Previously tts_off was, for unknown reasons, of type long. For one thing,
  that's unnecessary, as tuples are restricted in length; for another, long
  would be a bad choice of type even if that weren't the case, as it's not
  reliably wider than an int. Also, HeapTupleHeader->t_len is a uint32.

  This is split off from a larger patch implementing JITed tuple deforming.
  It seems like an independent improvement, as tiny as it is.

  Author: Andres Freund

* Error message improvement (Peter Eisentraut, 2018-02-20)

* Fix pg_dump's logic for eliding sequence limits that match the defaults. (Tom Lane, 2018-02-20)
  The previous coding here applied atoi() to strings that could represent
  values too large to fit in an int. If the overflowed value happened to match
  one of the cases it was looking for, it would drop that limit value from the
  output, leading to incorrect restoration of the sequence.

  Avoid the unsafe behavior, and also make the logic cleaner by explicitly
  calculating the default min/max values for the appropriate kind of sequence.
  (A sketch of the hazard follows below.)

  Reported and patched by Alexey Bashtanov, though I whacked his patch around
  a bit. Back-patch to v10 where the faulty logic was added.

  Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7a33@imap.cc

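  A minimal, hedged illustration of why atoi() is unsafe for this purpose, not
  the actual pg_dump code; the sample value and the strtoll-based alternative
  are just one way to handle 64-bit sequence limits safely.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        /* Default MAXVALUE of a bigint sequence, as it appears in text form. */
        const char *maxv = "9223372036854775807";

        /* atoi() overflows silently here (the result is unreliable), so a
         * comparison against an int constant can falsely "match". */
        int bad = atoi(maxv);

        /* A safer route: parse into a wide enough type and check errno. */
        errno = 0;
        long long good = strtoll(maxv, NULL, 10);
        if (errno == ERANGE)
            fprintf(stderr, "value out of range\n");

        printf("atoi -> %d, strtoll -> %lld\n", bad, good);
        return 0;
    }
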
* Adjust ALTER TABLE docs on partitioned constraints (Alvaro Herrera, 2018-02-20)
  Move the "additional restrictions" comment to ALTER TABLE ADD CONSTRAINT
  instead of ADD CONSTRAINT USING INDEX; in the latter, instead indicate that
  partitioned tables are unsupported.

  Noted by David G. Johnston

  Discussion: https://postgr.es/m/CAKFQuwY4Ld7ecxL_KAmaxwt0FUu5VcPPN2L4dh+3BeYbrdBa5g@mail.gmail.com

* Fix typo (Magnus Hagander, 2018-02-20)
  Author: Masahiko Sawada

* Fix crash in pg_replication_slot_advance (Alvaro Herrera, 2018-02-19)
  We were trying to use an LSN variable after releasing its containing slot
  structure.

  Reported by: tushar
  Author: amul sul
  Reviewed-by: Petr Jelinek, Masahiko Sawada
  Discussion: https://postgr.es/m/94ba999c-f76a-0423-6523-b8d531dfe4c7@enterprisedb.com

* Fix misbehavior of CTE-used-in-a-subplan during EPQ rechecks. (Tom Lane, 2018-02-19)
  An updating query that reads a CTE within an InitPlan or SubPlan could get
  incorrect results if it updates rows that are concurrently being modified.
  This is caused by CteScanNext supposing that nothing inside its recursive
  ExecProcNode call could change which read pointer is selected in the CTE's
  shared tuplestore. While that's normally true because of scoping
  considerations, it can break down if an EPQ plan tree gets built during the
  call, because EvalPlanQualStart builds execution trees for all subplans
  whether they're going to be used during the recheck or not. And it seems
  like a pretty shaky assumption anyway, so let's just reselect our own read
  pointer here.

  Per bug #14870 from Andrei Gorita. This has been broken since CTEs were
  implemented, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org

* Fix expected output (Alvaro Herrera, 2018-02-19)

* Allow UNIQUE indexes on partitioned tables (Alvaro Herrera, 2018-02-19)
  If we restrict unique constraints on partitioned tables so that they must
  always include the partition key, then our standard approach to unique
  indexes already works --- each unique key is forced to exist within a single
  partition, so enforcing the unique restriction in each index individually is
  enough to have it enforced globally. Therefore we can implement unique
  indexes on partitions by simply removing a few restrictions (and adding
  others).

  Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql
  Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql
  Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime Casanova, Amit Langote

* Remove bogus "extern" annotations on function definitions. (Tom Lane, 2018-02-19)
  While this is not illegal C, project style is to put "extern" only on
  declarations, not definitions.

  David Rowley

  Discussion: https://postgr.es/m/CAKJS1f9RKLWXcMBQhvDYhmsMEo+ALuNgA-NE+AX5Uoke9DJ2Xg@mail.gmail.com

* Remove redundant initialization of a local variable. (Tom Lane, 2018-02-18)
  In what was doubtless a typo, commit bf6c614a2 introduced a duplicate
  initialization of a local variable. This made Coverity unhappy, as well as
  pretty much anybody reading the code. We don't even have a real use for the
  local variable, so just remove it.

* Fix StaticAssertExpr() under C++ (Peter Eisentraut, 2018-02-18)
  The previous code didn't compile, because static_assert() must end with a
  semicolon. To fix, wrap it in a block, similar to the C code.

* Remove redundant function declaration (Peter Eisentraut, 2018-02-18)

* Message style fix (Peter Eisentraut, 2018-02-18)

* Move function comment to the right place (Peter Eisentraut, 2018-02-17)

* Minor comment fix (Peter Eisentraut, 2018-02-17)

* Refactor format_type APIs to be more modular (Alvaro Herrera, 2018-02-17)
  Introduce a new format_type_extended, with a flags bitmask argument that can
  modify the default behavior. A few compatibility and readability wrappers
  remain:

    format_type_be
    format_type_be_qualified
    format_type_with_typemod

  while format_type_with_typemod_qualified, which had a single caller, is
  removed.

  Author: Michael Paquier, some revisions by me
  Discussion: 20180213035107.GA2915@paquier.xyz

* Mention trigger name in trigger test (Alvaro Herrera, 2018-02-17)
  This makes it more explicit exactly what is going on, for further proposed
  behavior changes.

  Discussion: https://postgr.es/m/20180214212624.hm7of76flesodamf@alvherre.pgsql

* Allow tupleslots to have a fixed tupledesc, use in executor nodes. (Andres Freund, 2018-02-16)
  The reason for doing so is that it will allow expression evaluation to
  optimize based on the underlying tupledesc. In particular it will allow
  JIT compilation of tuple deforming together with the expression itself.

  For that, expression initialization needs to be moved after the relevant
  slots are initialized - mostly unproblematic, except in the case of
  nodeWorktablescan.c.

  After doing so there's no need for ExecAssignResultType() and
  ExecAssignResultTypeFromTL() anymore, as all former callers have been
  converted to create a slot with a fixed descriptor.

  When creating a slot with a fixed descriptor, tts_values/isnull can be
  allocated together with the main slot, reducing allocation overhead and
  increasing cache density a bit.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20171206093717.vqdxe5icqttpxs3p@alap3.anarazel.de

* Do execGrouping.c via expression eval machinery, take two. (Andres Freund, 2018-02-16)
  This has a performance benefit of its own, although not a huge one. The
  primary benefit is that it will allow JIT compilation of tuple deforming and
  comparator invocations.

  Large parts of this were previously committed (773aec7aa), but the commit
  contained an omission around cross-type comparisons and was thus reverted.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de

* Fix crash when canceling parallel query (Peter Eisentraut, 2018-02-16)
  elog(FATAL) would end up calling PortalCleanup(), which would call executor
  shutdown code, which could fail and crash, especially under parallel query.
  This was introduced by 8561e4840c81f7e345be2df170839846814fa004, which did
  not want to mark an active portal as failed by a normal transaction abort
  anymore. But we do need to do that for an elog(FATAL) exit. Introduce a
  variable shmem_exit_inprogress similar to the existing proc_exit_inprogress,
  so we can tell whether we are in the FATAL exit scenario.

  Reported-by: Andres Freund <andres@anarazel.de>

* Remove some inappropriate #includes. (Tom Lane, 2018-02-16)
  Other header files should never #include postgres.h (nor postgres_fe.h, nor
  c.h), per project policy. Also, there's no need for any backend .c file to
  explicitly include elog.h or palloc.h, because postgres.h pulls those in
  already.

  Extracted from a larger patch by Kyotaro Horiguchi. The rest of the removals
  he suggests require more study, but these are no-brainers.

  Discussion: https://postgr.es/m/20180215.200447.209320006.horiguchi.kyotaro@lab.ntt.co.jp

* Rename enable_partition_wise_join to enable_partitionwise_join (Peter Eisentraut, 2018-02-16)
  Discussion: https://www.postgresql.org/message-id/flat/ad24e4f4-6481-066e-e3fb-6ef4a3121882%402ndquadrant.com

* Fix typo in comment (Magnus Hagander, 2018-02-16)

* Revert "Do execGrouping.c via expression eval machinery."Andres Freund2018-02-15
| | | | | | | | | | | | | | | This reverts commit 773aec7aa98abd38d6d9435913bb8e14e392c274. There's an unresolved issue in the reverted commit: It only creates one comparator function, but in for the nodeSubplan.c case we need more (c.f. FindTupleHashEntry vs LookupTupleHashEntry calls in nodeSubplan.c). This isn't too difficult to fix, but it's not entirely trivial either. The fact that the issue only causes breakage on 32bit systems shows that the current test coverage isn't that great. To avoid turning half the buildfarm red till those two issues are addressed, revert.
* Do execGrouping.c via expression eval machinery. (Andres Freund, 2018-02-15)
  This has a performance benefit of its own, although not a huge one. The
  primary benefit is that it will allow JIT compilation of tuple deforming and
  comparator invocations.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20171129080934.amqqkke2zjtekd4t@alap3.anarazel.de

* Fix plpgsql to enforce domain checks when returning a NULL domain value. (Tom Lane, 2018-02-15)
  If a plpgsql function is declared to return a domain type, and the domain's
  constraints forbid a null value, it was nonetheless possible to return NULL,
  because we didn't bother to check the constraints for a null result. I'd
  noticed this while fooling with domains-over-composite, but had not gotten
  around to fixing it immediately.

  Add a regression test script exercising this and various other domain cases,
  largely borrowed from the plpython_types test.

  Although this is clearly a bug fix, I'm not sure whether anyone would thank
  us for changing the behavior in stable branches, so I'm inclined not to
  back-patch.

* Doc: fix minor bug in CREATE TABLE example. (Tom Lane, 2018-02-15)
  One example in create_table.sgml claimed to be showing table constraint
  syntax, but it was really column constraint syntax due to the omission of a
  comma. This is both wrong and confusing, so fix it in all supported
  branches.

  Per report from neil@postgrescompare.com.

  Discussion: https://postgr.es/m/151871659877.1393.2431103178451978795@wrigleys.postgresql.org

* Cast to void in StaticAssertExpr, not its callers. (Tom Lane, 2018-02-15)
  Seems a bit silly that many (in fact all, as of today) uses of
  StaticAssertExpr would need to cast it to void to avoid warnings from
  pickier compilers. Let's just do the cast right in the macro, instead.
  (A generic sketch of the pattern follows below.)

  In passing, change StaticAssertExpr to StaticAssertStmt in one place where
  that seems more apropos.

  Discussion: https://postgr.es/m/16161.1518715186@sss.pgh.pa.us

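  As a hedged illustration of the idea (not the actual c.h macro), here is a
  generic expression-style static assertion, using the classic negative-width
  bitfield trick, with the void cast folded into the macro so callers never see
  an "expression result unused" warning:

    #include <stdio.h>

    /* Illustrative only: an expression-form static assertion that already
     * discards its (meaningless) value, so call sites need no explicit cast. */
    #define STATIC_ASSERT_EXPR(cond) \
        ((void) sizeof(struct { int static_assert_failed : (cond) ? 1 : -1; }))

    int
    main(void)
    {
        /* No "(void)" needed at the call site; the macro already discards it. */
        STATIC_ASSERT_EXPR(sizeof(long) >= sizeof(int));
        printf("ok\n");
        return 0;
    }
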
* Move the extern declaration for ExceptionalCondition into c.h. (Tom Lane, 2018-02-14)
  This is the logical conclusion of our decision to support Assert() in both
  frontend and backend code: it should be possible to use that after including
  just c.h. But as things were arranged before, if you wanted to use Assert()
  in code that might be compiled for either environment, you had to include
  postgres.h for the backend case. Let's simplify that.

  Per buildfarm, some of whose members started throwing warnings after commit
  0c62356cc added an Assert in src/port/snprintf.c.

  It's possible that some other src/port files that use the stanza

      #ifndef FRONTEND
      #include "postgres.h"
      #else
      #include "postgres_fe.h"
      #endif

  could now be simplified to just say '#include "c.h"'. I have not tested for
  that, though, and it'd be unlikely to apply for more than a small number of
  them.

* Revert "Stabilize output of new regression test case". (Tom Lane, 2018-02-14)
  This effectively reverts commit 9edc97b71 (although the test is now in a
  different place and has different contents). We don't need that hack
  anymore, because since commit 4b93f5799, this test case no longer throws an
  error and so there's no displayed CONTEXT that could vary depending on
  CLOBBER_CACHE_ALWAYS. The underlying unstable-output problem isn't really
  gone, of course, but it no longer manifests here.

* Stabilize new plpgsql_record regression tests. (Tom Lane, 2018-02-14)
  The buildfarm's CLOBBER_CACHE_ALWAYS animals aren't happy with some of the
  test cases added in commit 4b93f5799. There are two different problems:

  * In two places, a different CONTEXT stack is shown because the error is
    detected in a different place, due to recompiling an expression from
    scratch rather than re-using a previously cached plan for it. I fixed
    these via the expedient of hiding the CONTEXT stack altogether.

  * In one place, a test expected to fail (because a cached plan hadn't been
    updated) actually succeeds (because the forced recompile makes it good).
    I couldn't think of a simple workaround for this, so I've just commented
    out that test step altogether.

  I have hopes of improving things enough that both of these kluges can be
  reverted eventually. The first one is the same kind of problem previously
  discussed at https://postgr.es/m/31545.1512924904@sss.pgh.pa.us but there
  was insufficient agreement about how to fix it, so we just hacked around the
  output instability (commit 9edc97b71). The second issue should be fixed by
  allowing the plan to be rebuilt when a type conflict is detected. But for
  today, let's just make the buildfarm green again.

* Return implementation defined value if pg_$op_s$bit_overflow overflows. (Andres Freund, 2018-02-14)
  Some older compilers otherwise sometimes complain about undefined values,
  even though the return value should not be used in the overflow case. We
  assume that any decent compiler will optimize away the unnecessary
  assignment in performance critical cases.

  We do not want to constrain the returned value to a specific value, e.g. 0
  or the wrapped-around value, because some fast ways to implement
  overflow-detecting math do not easily allow for that (e.g. msvc intrinsics).
  As the function documentation already documents the returned value in case
  of intrinsics to be implementation defined, no documentation has to be
  updated. (A hedged sketch of such a helper follows below.)

  Per complaint from Tom Lane and his buildfarm member prairiedog.

  Author: Andres Freund
  Discussion: https://postgr.es/m/18169.1513958454@sss.pgh.pa.us

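  A minimal sketch of an overflow-detecting add along these lines, assuming
  GCC/Clang builtins where available; the function name and the particular
  dummy value assigned on overflow are illustrative, not the actual
  src/include/common/int.h definitions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative overflow-detecting 32-bit addition: returns true on
     * overflow, in which case *result holds an implementation-defined (here:
     * arbitrary) value that callers must not rely on. */
    static inline bool
    add_s32_overflow(int32_t a, int32_t b, int32_t *result)
    {
    #if defined(__GNUC__) || defined(__clang__)
        return __builtin_add_overflow(a, b, result);
    #else
        int64_t res = (int64_t) a + (int64_t) b;

        if (res > INT32_MAX || res < INT32_MIN)
        {
            *result = 0x7FFFFFFF;   /* silences "may be used uninitialized" */
            return true;
        }
        *result = (int32_t) res;
        return false;
    #endif
    }
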
* Silence assorted "variable may be used uninitialized" warnings. (Tom Lane, 2018-02-14)
  All of these are false positives, but in each case a fair amount of analysis
  is needed to see that, and it's not too surprising that not all compilers
  are smart enough. (In particular, in the logtape.c case, a compiler lacking
  the knowledge provided by the Assert would almost surely complain, so that
  this warning will be seen in any non-assert build.)

  Some of these are of long standing while others are pretty recent, but it
  only seems worth fixing them in HEAD.

  Jaime Casanova, tweaked a bit by me

  Discussion: https://postgr.es/m/CAJGNTeMcYAMJdPAom52dppLMtF-UnEZi0dooj==75OEv1EoBZA@mail.gmail.com

* Add an assertion that we don't pass NULL to snprintf("%s"). (Tom Lane, 2018-02-14)
  Per commit e748e902d, we appear to have little or no coverage in the
  buildfarm of machines that will dump core when asked to printf a null string
  pointer. Let's try to improve that situation by adding an assertion that
  will make src/port/snprintf.c behave that way. Since it's just an assertion,
  it won't break anything in production builds, but it will help developers
  find this type of oversight. (A tiny illustration follows below.)

  Note that while our buildfarm coverage of machines that use that snprintf
  implementation is pretty thin on the Unix side (apparently amounting only to
  gaur/pademelon), all of the MSVC critters use it.

  Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru

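  A tiny standalone illustration of the idea, not the snprintf.c patch itself:
  an assertion fails loudly in assert-enabled builds before the platform's
  printf either crashes or quietly prints "(null)".

    #include <assert.h>
    #include <stdio.h>

    int
    main(void)
    {
        const char *name = NULL;    /* the kind of oversight the assertion catches */

        assert(name != NULL);       /* fires in assert-enabled builds only */
        printf("%s\n", name);       /* behavior here is platform-dependent */
        return 0;
    }
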
* Fix broken logic for reporting PL/Python function names in errcontext. (Tom Lane, 2018-02-14)
  plpython_error_callback() reported the name of the function associated with
  the topmost PL/Python execution context. This was not merely wrong if there
  were nested PL/Python contexts, but it risked a core dump if the topmost one
  is an inline code block rather than a named function. That will have
  proname = NULL, and so we were passing a NULL pointer to snprintf("%s"). It
  seems that none of the PL/Python-testing machines in the buildfarm will dump
  core for that, but some platforms do, as reported by Marina Polyakova.

  Investigation finds that there actually is an existing regression test that
  used to prove that the behavior was wrong, though apparently no one had
  noticed that it was printing the wrong function name. It stopped showing the
  problem in 9.6 when we adjusted psql to not print CONTEXT by default for
  NOTICE messages. The problem is masked (if your platform avoids the core
  dump) in error cases, because PL/Python will throw away the originally
  generated error info in favor of a new traceback produced at the outer
  level.

  Repair by using ErrorContextCallback.arg to pass the correct context to the
  error callback. Add a regression test illustrating correct behavior. (A
  hedged sketch of that callback pattern follows below.)

  Back-patch to all supported branches, since they're all broken this way.

  Discussion: https://postgr.es/m/156b989dbc6fe7c4d3223cf51da61195@postgrespro.ru

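  A hedged sketch of the ErrorContextCallback.arg pattern the fix relies on.
  The execution-context type and its fields below are invented for
  illustration; only the ErrorContextCallback machinery itself (callback, arg,
  previous, error_context_stack, errcontext()) is the real backend API, and the
  snippet assumes the backend headers are available.

    #include "postgres.h"

    /* Hypothetical per-call execution context, for illustration only. */
    typedef struct MyExecContext
    {
        const char *proc_name;      /* NULL for an inline code block */
    } MyExecContext;

    static void
    my_error_callback(void *arg)
    {
        MyExecContext *ctx = (MyExecContext *) arg;

        /* Report the context explicitly handed to us, not a global "topmost"
         * pointer, and cope with a NULL name. */
        if (ctx && ctx->proc_name)
            errcontext("PL/Python function \"%s\"", ctx->proc_name);
        else
            errcontext("PL/Python anonymous code block");
    }

    static void
    run_with_context(MyExecContext *ctx)
    {
        ErrorContextCallback errcallback;

        errcallback.callback = my_error_callback;
        errcallback.arg = ctx;              /* pass the correct context here */
        errcallback.previous = error_context_stack;
        error_context_stack = &errcallback;

        /* ... execute the user code ... */

        error_context_stack = errcallback.previous;
    }
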
* Support CONSTANT/NOT NULL/initial value for plpgsql composite variables. (Tom Lane, 2018-02-13)
  These features were never implemented previously for composite or record
  variables ... not that the documentation admitted it, so there are no doc
  updates here.

  This also fixes some issues concerning enforcing DOMAIN NOT NULL constraints
  against plpgsql variables, although I'm not sure that that topic is
  completely dealt with.

  I created a new plpgsql test file for these features, and moved the one
  relevant existing test case into that file.

  Tom Lane, reviewed by Daniel Gustafsson

  Discussion: https://postgr.es/m/18362.1514605650@sss.pgh.pa.us

* Speed up plpgsql trigger startup by introducing "promises". (Tom Lane, 2018-02-13)
  Over the years we've accreted quite a few special variables that are
  predefined in plpgsql trigger functions. The cost of initializing these
  variables to their defined values turns out to be a significant part of the
  runtime of simple triggers; but, undoubtedly, most real-world triggers never
  examine the values of most of these variables.

  To improve matters, invent the notion of a variable that has a "promise"
  attached to it, specifying which of the predetermined values should be
  assigned to the variable if anything ever reads it. This eliminates all the
  unneeded startup overhead, in return for a small penalty on accesses to
  these variables. (A hedged sketch of the idea follows below.)

  Tom Lane, reviewed by Pavel Stehule

  Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us

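  The following is only an illustrative sketch of the lazy "promise" technique
  described above, with all type and field names invented; it is not the
  plpgsql implementation.

    #include <stdbool.h>
    #include <stddef.h>

    /* Which predefined value a variable stands for, if it is ever read. */
    typedef enum VarPromise
    {
        PROMISE_NONE,
        PROMISE_TG_NAME,
        PROMISE_TG_WHEN
        /* ... one per special trigger variable ... */
    } VarPromise;

    typedef struct PromisedVar
    {
        VarPromise  promise;
        bool        valid;          /* has the promise been fulfilled yet? */
        const char *value;
    } PromisedVar;

    typedef struct TriggerInfo      /* stand-in for the real trigger data */
    {
        const char *name;
        const char *when;
    } TriggerInfo;

    /* Startup just records the promise; no value is computed. */
    static void
    var_init(PromisedVar *var, VarPromise promise)
    {
        var->promise = promise;
        var->valid = false;
        var->value = NULL;
    }

    /* Only a read pays the cost, and only the first read. */
    static const char *
    var_read(PromisedVar *var, const TriggerInfo *tg)
    {
        if (!var->valid)
        {
            switch (var->promise)
            {
                case PROMISE_TG_NAME:
                    var->value = tg->name;
                    break;
                case PROMISE_TG_WHEN:
                    var->value = tg->when;
                    break;
                default:
                    break;
            }
            var->valid = true;
        }
        return var->value;
    }
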
* Speed up plpgsql function startup by doing fewer pallocs. (Tom Lane, 2018-02-13)
  Previously, copy_plpgsql_datum did a separate palloc for each variable
  needing instance-local storage. In simple benchmarks this made for a
  noticeable fraction of the total runtime. Improve it by precalculating the
  space needed for all of a function's variables and doing just one palloc
  for all of them. (A hedged sketch of the pattern follows below.)

  In passing, remove PLPGSQL_DTYPE_EXPR from the list of plpgsql "datum"
  types, since in fact it has nothing in common with the others, and there is
  no place that needs to discriminate on the basis of dtype between an
  expression and any type of datum. And add comments clarifying which datum
  struct fields are generic and which aren't.

  Tom Lane, reviewed by Pavel Stehule

  Discussion: https://postgr.es/m/11986.1514407114@sss.pgh.pa.us

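  A hedged, standalone sketch of the "one allocation for everything" pattern
  described above, using malloc in place of palloc and invented names; the real
  code naturally deals in plpgsql datum structs rather than opaque byte blobs.

    #include <stdlib.h>
    #include <string.h>

    /* Round sizes up so every slice is suitably aligned (simplified MAXALIGN). */
    #define ALIGN_UP(sz) (((sz) + sizeof(long) - 1) & ~(sizeof(long) - 1))

    /* Copy ndatums variable-sized blobs with a single allocation instead of
     * one allocation per blob. Returns the block the caller eventually frees. */
    static void *
    copy_all_datums(const void *const *src, const size_t *sizes,
                    void **copies, int ndatums)
    {
        size_t      total = 0;
        char       *space;
        char       *ptr;

        for (int i = 0; i < ndatums; i++)
            total += ALIGN_UP(sizes[i]);

        space = malloc(total);
        if (space == NULL)
            return NULL;

        ptr = space;
        for (int i = 0; i < ndatums; i++)
        {
            copies[i] = ptr;            /* hand out a slice of the single block */
            memcpy(ptr, src[i], sizes[i]);
            ptr += ALIGN_UP(sizes[i]);
        }
        return space;
    }
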
* Make plpgsql use its DTYPE_REC code paths for composite-type variables. (Tom Lane, 2018-02-13)
  Formerly, DTYPE_REC was used only for variables declared as "record";
  variables of named composite types used DTYPE_ROW, which is faster for some
  purposes but much less flexible. In particular, the ROW code paths are
  entirely incapable of dealing with DDL-caused changes to the number or data
  types of the columns of a row variable, once a particular plpgsql function
  has been parsed for the first time in a session. And, since the stored
  representation of a ROW isn't a tuple, there wasn't any easy way to deal
  with variables of domain-over-composite types, since the domain constraint
  checking code would expect the value to be checked to be a tuple. A lesser,
  but still real, annoyance is that ROW format cannot represent a true NULL
  composite value, only a row of per-field NULL values, which is not exactly
  the same thing.

  Hence, switch to using DTYPE_REC for all composite-typed variables, whether
  "record", named composite type, or domain over named composite type.
  DTYPE_ROW remains but is used only for its native purpose, to represent a
  fixed-at-compile-time list of variables, for instance the targets of an
  INTO clause.

  To accomplish this without taking significant performance losses, introduce
  infrastructure that allows storing composite-type variables as "expanded
  objects", similar to the "expanded array" infrastructure introduced in
  commit 1dc5ebc90. A composite variable's value is thereby kept (most of the
  time) in the form of separate Datums, so that field accesses and updates
  are not much more expensive than they were in the ROW format. This holds
  the line, more or less, on performance of variables of named composite
  types in field-access-intensive microbenchmarks, and makes variables
  declared "record" perform much better than before in similar tests.

  In addition, the logic involved with enforcing composite-domain constraints
  against updates of individual fields is in the expanded record
  infrastructure not plpgsql proper, so that it might be reusable for other
  purposes.

  In further support of this, introduce a typcache feature for assigning a
  unique-within-process identifier to each distinct tuple descriptor of
  interest; in particular, DDL alterations on composite types result in a new
  identifier for that type. This allows very cheap detection of the need to
  refresh tupdesc-dependent data. This improves on the "tupDescSeqNo" idea I
  had in commit 687f096ea: that assigned identifying sequence numbers to
  successive versions of individual composite types, but the numbers were not
  unique across different types, nor was there support for assigning numbers
  to registered record types.

  In passing, allow plpgsql functions to accept as well as return type
  "record". There was no good reason for the old restriction, and it was out
  of step with most of the other PLs.

  Tom Lane, reviewed by Pavel Stehule

  Discussion: https://postgr.es/m/8962.1514399547@sss.pgh.pa.us

* doc: pg_function_is_visible also applies to aggregates and procedures (Peter Eisentraut, 2018-02-13)

* Add procedure support to pg_get_functiondef (Peter Eisentraut, 2018-02-13)
  This also makes procedures work in psql's \ef and \sf commands.

  Reported-by: Pavel Stehule <pavel.stehule@gmail.com>

* Add tests for pg_get_functiondef (Peter Eisentraut, 2018-02-13)