path: root/src
* Fix up ecpg's configuration so it handles "long long int" in MSVC builds. (Tom Lane, 2018-02-27)
  Although configure-based builds correctly define HAVE_LONG_LONG_INT when appropriate (in both pg_config.h and ecpg_config.h), builds using the MSVC scripts failed to do so. This currently has no impact on the backend, since it uses that symbol nowhere; but it does prevent ecpg from supporting "long long int". Fix that.
  Also, adjust Solution.pm so that in the constructed ecpg_config.h file, the "#if (_MSC_VER > 1200)" covers only the LONG_LONG_INT-related #defines, not the whole file. AFAICS this was a thinko on somebody's part: ENABLE_THREAD_SAFETY should always be defined in Windows builds, and in branches using USE_INTEGER_DATETIMES, the setting of that shouldn't depend on the compiler version either. If I'm wrong, I imagine the buildfarm will say so.
  Per bug #15080 from Jonathan Allen; issue diagnosed by Michael Meskes and Andrew Gierth. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/151935568942.1461.14623890240535309745@wrigleys.postgresql.org
* Use the correct tuplestore read pointer in a NamedTuplestoreScan. (Tom Lane, 2018-02-27)
  Tom Kazimiers reported that transition tables don't work correctly when they are scanned by more than one executor node. That's because commit 18ce3a4ab allocated separate read pointers for each executor node, as it must, but failed to make them active at the appropriate times. Repair.
  Thomas Munro
  Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
* Revert renaming of int44in/int44out. (Tom Lane, 2018-02-27)
  This seemed like a good idea in commit be42eb9d6, but it causes more trouble than it's worth for cross-branch upgrade testing.
  Discussion: https://postgr.es/m/11927.1519756619@sss.pgh.pa.us
* Prevent dangling-pointer access when update trigger returns old tuple. (Tom Lane, 2018-02-27)
  A before-update row trigger may choose to return the "new" or "old" tuple unmodified. ExecBRUpdateTriggers failed to consider the second possibility, and would proceed to free the "old" tuple even if it was the one returned, leading to subsequent access to already-deallocated memory. In debug builds this reliably leads to an "invalid memory alloc request size" failure; in production builds it might accidentally work, but data corruption is also possible.
  This is a very old bug. There are probably a couple of reasons it hasn't been noticed up to now. It would be more usual to return NULL if one wanted to suppress the update action; returning "old" is significantly less efficient since the update will occur anyway. Also, none of the standard PLs would ever cause this because they all returned freshly-manufactured tuples even if they were just copying "old". But commit 4b93f5799 changed that for plpgsql, making it possible to see the bug with a plpgsql trigger.
  Still, this is certainly legal behavior for a trigger function, so it's ExecBRUpdateTriggers's fault not plpgsql's. It seems worth creating a test case that exercises returning "old" directly with a C-language trigger; testing this through plpgsql seems unreliable because its behavior might change again.
  Report and fix by Rushabh Lathia; regression test case by me. Back-patch to all supported branches.
  Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
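  For illustration only, a minimal sketch of the kind of trigger that exercises this code path once plpgsql started handing back the caller's "old" tuple directly; the table, function, and trigger names here are hypothetical, not taken from the commit or its regression test:

      -- A before-update row trigger that returns OLD unmodified: legal, but it
      -- hits the path ExecBRUpdateTriggers mishandled (the update still occurs,
      -- just with the original row contents).
      CREATE TABLE trig_demo (id int PRIMARY KEY, val text);

      CREATE FUNCTION keep_old_row() RETURNS trigger
      LANGUAGE plpgsql AS $$
      BEGIN
          RETURN OLD;
      END;
      $$;

      CREATE TRIGGER trig_demo_keep_old
          BEFORE UPDATE ON trig_demo
          FOR EACH ROW EXECUTE PROCEDURE keep_old_row();

      INSERT INTO trig_demo VALUES (1, 'before');
      UPDATE trig_demo SET val = 'after';   -- row keeps 'before', since the trigger returned OLD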
* Minor cleanup of code related to partially_grouped_rel. (Robert Haas, 2018-02-27)
  Jeevan Chalke
  Discussion: http://postgr.es/m/CAM2+6=X9kxQoL2ZqZ00E6asBt9z+rfyWbOmhXJ0+8fPAyMZ9Jg@mail.gmail.com
* Fix logic error in add_paths_to_partial_grouping_rel. (Robert Haas, 2018-02-27)
  Commit 3bf05e096b9f8375e640c5d7996aa57efd7f240c sometimes uses the cheapest_partial_path variable in this function to mean the cheapest one from the input rel and at other times the cheapest one from the partially grouped rel, but it never resets it, so we can end up with bad plans, leading to "ERROR: Aggref found in non-Agg plan node".
  Jeevan Chalke, per a report from Andreas Joseph Krogh and a separate off-list report from Rajkumar Raghuwanshi
  Discussion: http://postgr.es/m/CAM2+6=X9kxQoL2ZqZ00E6asBt9z+rfyWbOmhXJ0+8fPAyMZ9Jg@mail.gmail.com
* Improve regression test coverage of regress.c. (Tom Lane, 2018-02-27)
  It's a bit silly to have test functions that aren't tested, so test them.
  In passing, rename int44in/int44out to city_budget_in/_out so that they match how the regression tests use them. Also, fix city_budget_out so that it emits the format city_budget_in expects to read; otherwise we'd have dump/reload failures when testing pg_dump against the regression database. (We avoided that in the past only because no data of type city_budget was actually stored anywhere.)
  Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
* Remove unused functions in regress.c. (Tom Lane, 2018-02-27)
  This patch removes five functions that presumably were once used in the regression tests, but haven't been so used in many years. Nonetheless we've been wasting maintenance effort on them (e.g., by converting them to V1 function protocol). I see no reason to think that reviving them would add any useful test coverage, so drop 'em.
  In passing, mark regress_lseg_construct static, since it's not called from outside this file.
  Discussion: https://postgr.es/m/29322.1519701006@sss.pgh.pa.us
* Update PartitionTupleRouting struct comment (Alvaro Herrera, 2018-02-26)
  Small review on edd44738bc88.
  Discussion: https://postgr.es/m/20180222165315.k27qfn4goskhoswj@alvherre.pgsql
  Reviewed-by: Robert Haas, Amit Langote
* Schema-qualify references in test_ddl_deparse test script. (Tom Lane, 2018-02-26)
  This omission seems to be what is causing buildfarm failures on crake.
  Security: CVE-2018-1058
* Fix typo in internal error message (Peter Eisentraut, 2018-02-26)
* Document security implications of search_path and the public schema. (Noah Misch, 2018-02-26)
  The ability to create like-named objects in different schemas opens up the potential for users to change the behavior of other users' queries, maliciously or accidentally. When you connect to a PostgreSQL server, you should remove from your search_path any schema for which a user other than yourself or superusers holds the CREATE privilege. If you do not, other users holding CREATE privilege can redefine the behavior of your commands, causing them to perform arbitrary SQL statements under your identity. "SET search_path = ..." and "SELECT pg_catalog.set_config(...)" are not vulnerable to such hijacking, so one can use either as the first command of a session. As special exceptions, the following client applications behave as documented regardless of search_path settings and schema privileges: clusterdb createdb createlang createuser dropdb droplang dropuser ecpg (not programs it generates) initdb oid2name pg_archivecleanup pg_basebackup pg_config pg_controldata pg_ctl pg_dump pg_dumpall pg_isready pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_standby pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb vacuumlo. Not included are core client programs that run user-specified SQL commands, namely psql and pgbench. PostgreSQL encourages non-core client applications to do likewise.
  Document this in the context of libpq connections, psql connections, dblink connections, ECPG connections, extension packaging, and schema usage patterns. The principal defense for applications is "SELECT pg_catalog.set_config('search_path', '', false)", and the principal defense for databases is "REVOKE CREATE ON SCHEMA public FROM PUBLIC". Either one is sufficient to prevent attack. After a REVOKE, consider auditing the public schema for objects named like pg_catalog objects.
  Authors of SECURITY DEFINER functions use some of the same defenses, and the CREATE FUNCTION reference page already covered them thoroughly. This is a good opportunity to audit SECURITY DEFINER functions for robust security practice.
  Back-patch to 9.3 (all supported versions).
  Reviewed by Michael Paquier and Jonathan S. Katz. Reported by Arseniy Sharoglazov.
  Security: CVE-2018-1058
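  Spelled out as runnable statements, the two defenses named in this entry (the SQL is taken directly from the commit text; nothing here is assumed beyond that):

      -- Application-side defense: empty the search_path as the first command of a session.
      SELECT pg_catalog.set_config('search_path', '', false);

      -- Database-side defense: withdraw the default CREATE privilege on schema "public".
      REVOKE CREATE ON SCHEMA public FROM PUBLIC;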
* Empty search_path in Autovacuum and non-psql/pgbench clients. (Noah Misch, 2018-02-26)
  This makes the client programs behave as documented regardless of the connect-time search_path and regardless of user-created objects. Today, a malicious user with CREATE permission on a search_path schema can take control of certain of these clients' queries and invoke arbitrary SQL functions under the client identity, often a superuser. This is exploitable in the default configuration, where all users have CREATE privilege on schema "public".
  This changes behavior of user-defined code stored in the database, like pg_index.indexprs and pg_extension_config_dump(). If they reach code bearing unqualified names, "does not exist" or "no schema has been selected to create in" errors might appear. Users may fix such errors by schema-qualifying affected names. After upgrading, consider watching server logs for these errors.
  The --table arguments of src/bin/scripts clients have been lax; for example, "vacuumdb -Zt pg_am\;CHECKPOINT" performed a checkpoint. That now fails, but for now, "vacuumdb -Zt 'pg_am(amname);CHECKPOINT'" still performs a checkpoint.
  Back-patch to 9.3 (all supported versions).
  Reviewed by Tom Lane, though this fix strategy was not his first choice. Reported by Arseniy Sharoglazov.
  Security: CVE-2018-1058
* Avoid using unsafe search_path settings during dump and restore. (Tom Lane, 2018-02-26)
  Historically, pg_dump has "set search_path = foo, pg_catalog" when dumping an object in schema "foo", and has also caused that setting to be used while restoring the object. This is problematic because functions and operators in schema "foo" could capture references meant to refer to pg_catalog entries, both in the queries issued by pg_dump and those issued during the subsequent restore run. That could result in dump/restore misbehavior, or in privilege escalation if a nefarious user installs trojan-horse functions or operators.
  This patch changes pg_dump so that it does not change the search_path dynamically. The emitted restore script sets the search_path to what was used at dump time, and then leaves it alone thereafter. Created objects are placed in the correct schema, regardless of the active search_path, by dint of schema-qualifying their names in the CREATE commands, as well as in subsequent ALTER and ALTER-like commands.
  Since this change requires a change in the behavior of pg_restore when processing an archive file made according to this new convention, bump the archive file version number; old versions of pg_restore will therefore refuse to process files made with new versions of pg_dump.
  Security: CVE-2018-1058
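  To make the behavioral change concrete, a schematic contrast derived from the description above (the schema "foo" comes from the commit text; the table name is invented, and this is not literal pg_dump output):

      -- Old convention: the restore relied on the active search_path to place the object.
      SET search_path = foo, pg_catalog;
      CREATE TABLE t (a integer);

      -- New convention: search_path is set once and then left alone, and the CREATE
      -- (and later ALTER) commands schema-qualify the object name themselves.
      CREATE TABLE foo.t (a integer);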
* Add a new upper planner relation for partially-aggregated results. (Robert Haas, 2018-02-26)
  Up until now, we've abused grouped_rel->partial_pathlist as a place to store partial paths that have been partially aggregated, but that's really not correct, because a partial path for a relation is supposed to be one which produces the correct results with the addition of only a Gather or Gather Merge node, and these paths also require a Finalize Aggregate step. Instead, add a new partially_grouped_rel which can hold either partial paths (which need to be gathered and then have aggregation finalized) or non-partial paths (which only need to have aggregation finalized). This allows us to reuse generate_gather_paths for partially_grouped_rel instead of writing new code, so this patch adds basically no net new code while making things cleaner and simplifying things for pending patches for partition-wise aggregate.
  Robert Haas and Jeevan Chalke. The larger patch series of which this patch is a part was also reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, Rafia Sabih, and me.
  Discussion: http://postgr.es/m/CA+TgmobrzFYS3+U8a_BCy3-hOvh5UyJbC18rEcYehxhpw5=ETA@mail.gmail.com
  Discussion: http://postgr.es/m/CA+TgmoZyQEjdBNuoG9-wC5GQ5GrO4544Myo13dVptvx+uLg9uQ@mail.gmail.com
* Un-break parallel pg_upgrade. (Tom Lane, 2018-02-25)
  Commit b3f840120 changed pg_upgrade so that it'd actually drop and re-create the template1 and postgres databases in the new cluster. That works fine, serially. With the -j option it's not so fine, because other per-database jobs might be launched while the template1 database is dropped. Since they attempt to connect there to start up, kaboom.
  This is the cause of the intermittent failures buildfarm member jacana has been showing for the last month; evidently it is the only BF member configured to run the pg_upgrade test with parallelism enabled.
  Fix by processing template1 separately before we get into the parallel sub-job launch loop. (We could alternatively have made the postgres DB be the special case, but it seems likely that template1 will contain less stuff and so we lose less parallelism with this choice.)
* Update headers of generated files (Peter Eisentraut, 2018-02-24)
  The scripts were changed in c98c35cd084a25c6cf9b08c76de8b89facd75fe7, but the output files were not updated to reflect the script changes.
* Add current directory to Perl include path (Peter Eisentraut, 2018-02-24)
  Recent Perl versions don't have the current directory in the module include path anymore, so we need to add it here explicitly to make these scripts continue to work.
* Use croak instead of die in Perl code when appropriate (Peter Eisentraut, 2018-02-24)
* Fix thinko in in_range_float4_float8. (Tom Lane, 2018-02-24)
  I forgot the coding rule for correct use of Float8GetDatumFast. Per buildfarm.
* Add window RANGE support for float4, float8, numeric. (Tom Lane, 2018-02-24)
  Commit 0a459cec9 left this for later, but since time's running out, I went ahead and took care of it. There are more data types that somebody might someday want RANGE support for, but this is enough to satisfy all expectations of the SQL standard, which just says that "numeric, datetime, and interval" types should have RANGE support.
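  A quick sketch of what the feature allows (the data and column name are invented; the RANGE offset clause itself is the SQL-standard syntax this commit extends to float8 ordering columns):

      -- Value-based window frame: each row's frame contains the rows whose x
      -- lies within 1.5 of the current row's x.
      SELECT x,
             avg(x) OVER (ORDER BY x RANGE BETWEEN 1.5 PRECEDING AND 1.5 FOLLOWING)
      FROM (VALUES (1.0::float8), (2.0), (3.5), (7.0)) AS t(x);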
* Check error messages in SSL tests (Peter Eisentraut, 2018-02-24)
  In tests that check whether a connection fails, also check the error message. That makes sure that the connection was rejected for the right reason.
  This discovered that two tests had their connection failing for the wrong reason. One test failed because pg_hba.conf was not set up to allow that user, one test failed because the client key file did not have the right permissions. Fix those tests and add a new one that is really supposed to check the file permission issue.
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
* Fix filtering of unsupported relations in logical replication (Peter Eisentraut, 2018-02-23)
  In the pgoutput plugin, skip changes for relations that are not publishable, per is_publishable_class(). This concerns in particular materialized views and information_schema tables. While those relations cannot be part of a publication, per existing checks, they will be considered by a FOR ALL TABLES publication. A subscription would not actually apply changes for those relations, again per existing checks, but trying to match incoming changes to local tables on the subscriber would lead to errors if no matching local table exists. Skipping those changes on the publisher avoids sending useless changes and eliminates the error.
  Bug: #15044
  Reported-by: Chad Trabant <chad@iris.washington.edu>
  Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
* Fix brown-paper-bag bug in commit 0a459cec96d3856f476c2db298c6b52f592894e8. (Tom Lane, 2018-02-23)
  RANGE_OFFSET comparisons need to examine the first ORDER BY column, which isn't necessarily the first column in the incoming tuples. No idea how this slipped through initial testing.
  Per bug #15082 from Zhou Digoal.
  Discussion: https://postgr.es/m/151939899974.1461.9411971793110285476@wrigleys.postgresql.org
* Synchronize doc/ copies of src/test/examples/. (Noah Misch, 2018-02-23)
  This is mostly cosmetic, but it might fix build failures, on some platform, when copying from the documentation. Back-patch to 9.3 (all supported versions).
* Fix planner failures with overlapping mergejoin clauses in an outer join. (Tom Lane, 2018-02-23)
  Given overlapping or partially redundant join clauses, for example
      t1 JOIN t2 ON t1.a = t2.x AND t1.b = t2.x
  the planner's EquivalenceClass machinery will ordinarily refactor the clauses as "t1.a = t1.b AND t1.a = t2.x", so that join processing doesn't see multiple references to the same EquivalenceClass in a list of join equality clauses. However, if the join is outer, it's incorrect to derive a restriction clause on the outer side from the join conditions, so the clause refactoring does not happen and we end up with overlapping join conditions. The code that attempted to deal with such cases had several subtle bugs, which could result in "left and right pathkeys do not match in mergejoin" or "outer pathkeys do not match mergeclauses" planner errors, if the selected join plan type was a mergejoin. (It does not appear that any actually incorrect plan could have been emitted.)
  The core of the problem really was failure to recognize that the outer and inner relations' pathkeys have different relationships to the mergeclause list. A join's mergeclause list is constructed by reference to the outer pathkeys, so it will always be ordered the same as the outer pathkeys, but this cannot be presumed true for the inner pathkeys. If the inner sides of the mergeclauses contain multiple references to the same EquivalenceClass ({t2.x} in the above example) then a simplistic rendering of the required inner sort order is like "ORDER BY t2.x, t2.x", but the pathkey machinery recognizes that the second sort column is redundant and throws it away. The mergejoin planning code failed to account for that behavior properly.
  One error was to try to generate cut-down versions of the mergeclause list from cut-down versions of the inner pathkeys in the same way as the initial construction of the mergeclause list from the outer pathkeys was done; this could lead to choosing a mergeclause list that fails to match the outer pathkeys. The other problem was that the pathkey cross-checking code in create_mergejoin_plan treated the inner and outer pathkey lists identically, whereas actually the expectations for them must be different. That led to false "pathkeys do not match" failures in some cases, and in principle could have led to failure to detect bogus plans in other cases, though there is no indication that such bogus plans could be generated.
  Reported by Alexander Kuzmenkov, who also reviewed this patch. This has been broken for years (back to around 8.3 according to my testing), so back-patch to all supported branches.
  Discussion: https://postgr.es/m/5dad9160-4632-0e47-e120-8e2082000c01@postgrespro.ru
* Revise API for partition bound search functions. (Robert Haas, 2018-02-23)
  Similar to what commit b0229235564fbe3a9b1cc115ea738a07e274bf30 did for a different set of functions, pass the required bits of the PartitionKey instead of the whole thing. This allows these functions to be used without needing the PartitionKey to be available.
  Amit Langote. The larger patch series of which this patch is a part has been reviewed and tested by Ashutosh Bapat, David Rowley, Dilip Kumar, Jesper Pedersen, Rajkumar Raghuwanshi, Beena Emerson, Kyotaro Horiguchi, Álvaro Herrera, and me, but especially and in great detail by David Rowley.
  Discussion: http://postgr.es/m/098b9c71-1915-1a2a-8d52-1a7a50ce79e8@lab.ntt.co.jp
  Discussion: http://postgr.es/m/1f6498e8-377f-d077-e791-5dc84dba2c00@lab.ntt.co.jp
* Revise API for partition_rbound_cmp/partition_rbound_datum_cmp. (Robert Haas, 2018-02-23)
  Instead of passing the PartitionKey, pass just the required bits of it. This allows these functions to be used without needing the PartitionKey to be available, which is important for several pending patches.
  Ashutosh Bapat, reviewed by Amit Langote, with a comment tweak by me.
  Discussion: http://postgr.es/m/3d835ed1-36ab-f06d-0ce8-a76a2bbf7677@lab.ntt.co.jp
  Discussion: http://postgr.es/m/b4d88995-094b-320c-b614-2282fae0bf6c@lab.ntt.co.jp
* Support parameters in CALL (Peter Eisentraut, 2018-02-22)
  To support parameters in CALL, move the parse analysis of the procedure and arguments into the global transformation phase, so that the parser hooks can be applied. And then at execution time pass the parameters from ProcessUtility on to ExecuteCallStmt.
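  A small sketch of the statement shape involved (the procedure and table names are hypothetical, used only to illustrate CALL taking an argument expression):

      CREATE TABLE call_demo (v integer);

      CREATE PROCEDURE record_value(a integer)
      LANGUAGE plpgsql
      AS $$
      BEGIN
          INSERT INTO call_demo VALUES (a);
      END;
      $$;

      -- Per the commit, argument analysis now happens in the global transformation
      -- phase, so parser hooks apply (e.g. parameters supplied from PL/pgSQL).
      CALL record_value(40 + 2);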
* Remove extra words. (Robert Haas, 2018-02-22)
  Thomas Munro
  Discussion: http://postgr.es/m/CAEepm=2x3NUSPed6=-wDYs39KtUU5Dw3mK_NAMWps+18FmkApQ@mail.gmail.com
* Fix perlcritic warnings (Peter Eisentraut, 2018-02-22)
* Add user-callable SHA-2 functions (Peter Eisentraut, 2018-02-22)
  Add the user-callable functions sha224, sha256, sha384, sha512. We already had these in the C code to support SCRAM, but there was no test coverage outside of the SCRAM tests. Adding these as user-callable functions allows writing some tests. Also, we have a user-callable md5 function but no more modern alternative, which led to wide use of md5 as a general-purpose hash function, which leads to occasional complaints about using md5.
  Also mark the existing md5 functions as leak-proof.
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
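  A usage sketch for the new functions (the bytea-in/bytea-out signature is an assumption based on how binary digest functions are normally exposed, not something stated in this entry):

      -- Each function takes a bytea and returns the binary digest as bytea;
      -- encode(..., 'hex') renders it in the familiar hex form.
      SELECT encode(sha256('abc'::bytea), 'hex');
      SELECT encode(sha512('abc'::bytea), 'hex');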
* Be lazier about partition tuple routing. (Robert Haas, 2018-02-22)
  It's not necessary to fully initialize the executor data structures for partitions to which no tuples are ever routed. Consider, for example, an INSERT statement that inserts only one row: it only cares about the partition to which that one row is routed. The new function ExecInitPartitionInfo performs the initialization in question only when a particular partition is about to receive a tuple. This includes creating, validating, and saving a pointer to the ResultRelInfo, setting up for speculative insertions, translating WCOs and initializing the resulting expressions, translating returning lists and building the appropriate projection information, and setting up a tuple conversion map.
  One thing that's not deferred is locking the child partitions; that seems desirable but would need more thought. Still, testing shows that this makes single-row inserts significantly faster on a table with many partitions without harming the bulk-insert case.
  Amit Langote, reviewed by Etsuro Fujita, with a few changes by me
  Discussion: http://postgr.es/m/8975331d-d961-cbdd-f862-fdd3d97dc2d0@lab.ntt.co.jp
* Remove extra word from comment. (Robert Haas, 2018-02-22)
  Etsuro Fujita
  Discussion: http://postgr.es/m/5A8EAF74.5010905@lab.ntt.co.jp
* Avoid another valgrind complaint about write() of uninitialized bytes. (Robert Haas, 2018-02-22)
  Peter Geoghegan, per buildfarm member skink and Andres Freund
  Discussion: http://postgr.es/m/20180221053426.gp72lw67yfpzkw7a@alap3.anarazel.de
* Try to stabilize EXPLAIN output in partition_check test. (Robert Haas, 2018-02-22)
  Commit 7d8ac9814bc9bb6df2d845dbabed38d7284c7c2c adjusted these tests in the hope of preserving the plan shape, but I failed to notice that the three partitions were, on my local machine, choosing two different plan shapes. This is probably related to the fact that all three tables have exactly the same row count. Try to improve the situation by making pht1_e about half as large as the other two.
  Per Tom Lane and the buildfarm.
  Discussion: http://postgr.es/m/25380.1519277713@sss.pgh.pa.us
* Charge cpu_tuple_cost * 0.5 for Append and MergeAppend nodes. (Robert Haas, 2018-02-21)
  Previously, Append didn't charge anything at all, and MergeAppend charged only cpu_operator_cost, about half the value used here. This change might make MergeAppend plans slightly more likely to be chosen than before, since this commit increases the assumed cost for Append -- with default values -- by 0.005 per tuple but MergeAppend by only 0.0025 per tuple. Since the comparisons required by MergeAppend are costed separately, it's not clear why MergeAppend needs to be otherwise more expensive than Append, so hopefully this is OK.
  Prior to partition-wise join, it didn't really matter whether or not an Append node had any cost of its own, because every plan had to use the same number of Append or MergeAppend nodes and in the same places. Only the relative cost of Append vs. MergeAppend made a difference. Now, however, it is possible to avoid some of the Append nodes using a partition-wise join, so it's worth making an effort. Pending patches for partition-wise aggregate care too, because an Append of Aggregate nodes will incur the Append overhead fewer times than an Aggregate over an Append.
  Although in most cases this change will favor the use of partition-wise techniques, it does the opposite when the join cardinality is greater than the sum of the input cardinalities. Since this situation arises in an existing regression test, I [rhaas] adjusted it to keep the overall plan shape approximately the same.
  Jeevan Chalke, per a suggestion from David Rowley. Reviewed by Ashutosh Bapat. Some changes by me. The larger patch series of which this patch is a part was also reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand, Rafia Sabih, and me.
  Discussion: http://postgr.es/m/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B=ZRh-rxy9qxfPA5Gw@mail.gmail.com
* Repair pg_upgrade's failure to preserve relfrozenxid for matviews. (Tom Lane, 2018-02-21)
  This oversight led to data corruption in matviews, manifesting as "could not access status of transaction" before our most recent releases, and "found xmin from before relfrozenxid" errors since then.
  The proximate cause of the problem seems to have been confusion between the task of preserving dropped-column status and the task of preserving frozenxid status. Those are required for distinct sets of relkinds, and the reasoning was entirely undocumented in the source code. In hopes of forestalling future errors of the same kind, try to improve the commentary in this area.
  In passing, also improve the remarkably unhelpful comments around pg_upgrade's set_frozenxids(). That's not actually buggy AFAICS, but good luck figuring out what it does from the old comments.
  Per report from Claudio Freire. It appears that bug #14852 from Alexey Ermakov is an earlier report of the same issue, and there may be other cases that we failed to identify at the time.
  Patch by me based on analysis by Andres Freund. The bug dates back to the introduction of matviews, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/CAGTBQpbrY9CdRGGhyBZ9yqY4jWaGC85rUF4X+R7d-aim=mBNsw@mail.gmail.com
  Discussion: https://postgr.es/m/20171013115320.28049.86457@wrigleys.postgresql.org
* Use a platform-independent type for TupleTableSlot->tts_off. (Andres Freund, 2018-02-20)
  Previously tts_off was, for unknown reasons, of type long. For one thing, that's unnecessary, as tuples are restricted in length; for another, long would be a bad choice of type even if that weren't the case, as it's not reliably wider than an int. Also, HeapTupleHeader->t_len is a uint32.
  This is split off from a larger patch implementing JITed tuple deforming. Seems like an independent improvement, as tiny as it is.
  Author: Andres Freund
* Error message improvement (Peter Eisentraut, 2018-02-20)
* Fix pg_dump's logic for eliding sequence limits that match the defaults. (Tom Lane, 2018-02-20)
  The previous coding here applied atoi() to strings that could represent values too large to fit in an int. If the overflowed value happened to match one of the cases it was looking for, it would drop that limit value from the output, leading to incorrect restoration of the sequence.
  Avoid the unsafe behavior, and also make the logic cleaner by explicitly calculating the default min/max values for the appropriate kind of sequence.
  Reported and patched by Alexey Bashtanov, though I whacked his patch around a bit. Back-patch to v10 where the faulty logic was added.
  Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7a33@imap.cc
* Fix typo (Magnus Hagander, 2018-02-20)
  Author: Masahiko Sawada
* Fix crash in pg_replication_slot_advance (Alvaro Herrera, 2018-02-19)
  We were trying to use an LSN variable after releasing its containing slot structure.
  Reported by: tushar
  Author: amul sul
  Reviewed-by: Petr Jelinek, Masahiko Sawada
  Discussion: https://postgr.es/m/94ba999c-f76a-0423-6523-b8d531dfe4c7@enterprisedb.com
* Fix misbehavior of CTE-used-in-a-subplan during EPQ rechecks. (Tom Lane, 2018-02-19)
  An updating query that reads a CTE within an InitPlan or SubPlan could get incorrect results if it updates rows that are concurrently being modified. This is caused by CteScanNext supposing that nothing inside its recursive ExecProcNode call could change which read pointer is selected in the CTE's shared tuplestore. While that's normally true because of scoping considerations, it can break down if an EPQ plan tree gets built during the call, because EvalPlanQualStart builds execution trees for all subplans whether they're going to be used during the recheck or not. And it seems like a pretty shaky assumption anyway, so let's just reselect our own read pointer here.
  Per bug #14870 from Andrei Gorita. This has been broken since CTEs were implemented, so back-patch to all supported branches.
  Discussion: https://postgr.es/m/20171024155358.1471.82377@wrigleys.postgresql.org
* Fix expected output (Alvaro Herrera, 2018-02-19)
* Allow UNIQUE indexes on partitioned tables (Alvaro Herrera, 2018-02-19)
  If we restrict unique constraints on partitioned tables so that they must always include the partition key, then our standard approach to unique indexes already works --- each unique key is forced to exist within a single partition, so enforcing the unique restriction in each index individually is enough to have it enforced globally. Therefore we can implement unique indexes on partitions by simply removing a few restrictions (and adding others.)
  Discussion: https://postgr.es/m/20171222212921.hi6hg6pem2w2t36z@alvherre.pgsql
  Discussion: https://postgr.es/m/20171229230607.3iib6b62fn3uaf47@alvherre.pgsql
  Reviewed-by: Simon Riggs, Jesper Pedersen, Peter Eisentraut, Jaime Casanova, Amit Langote
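  A minimal sketch of the rule described above (the table, partition, and column names are invented for illustration):

      CREATE TABLE measurements (
          city    text,
          logdate date,
          reading numeric
      ) PARTITION BY RANGE (logdate);

      CREATE TABLE measurements_2018 PARTITION OF measurements
          FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

      -- Accepted: the unique key includes the partition key, so any given key
      -- can only ever fall into one partition and per-partition enforcement
      -- suffices globally.
      CREATE UNIQUE INDEX ON measurements (logdate, city);

      -- Still rejected: a unique key that omits the partition key would have
      -- to be enforced across partitions.
      -- CREATE UNIQUE INDEX ON measurements (city);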
* Remove bogus "extern" annotations on function definitions.Tom Lane2018-02-19
| | | | | | | | | While this is not illegal C, project style is to put "extern" only on declarations not definitions. David Rowley Discussion: https://postgr.es/m/CAKJS1f9RKLWXcMBQhvDYhmsMEo+ALuNgA-NE+AX5Uoke9DJ2Xg@mail.gmail.com
* Remove redundant initialization of a local variable. (Tom Lane, 2018-02-18)
  In what was doubtless a typo, commit bf6c614a2 introduced a duplicate initialization of a local variable. This made Coverity unhappy, as well as pretty much anybody reading the code. We don't even have a real use for the local variable, so just remove it.
* Fix StaticAssertExpr() under C++ (Peter Eisentraut, 2018-02-18)
  The previous code didn't compile, because static_assert() must end with a semicolon. To fix, wrap it in a block, similar to the C code.
* Remove redundant function declaration (Peter Eisentraut, 2018-02-18)