path: root/contrib
* Correct logical decoding restore behaviour for subtransactions. (Andres Freund, 2016-10-03)

  Before initializing iteration over a subtransaction's changes, the last
  few changes were not spilled to disk. That's correct if the transaction
  didn't spill to disk, but otherwise... This bug can lead to missed or
  misordered subtransaction contents when they were spilled to disk.

  Move spilling of the remaining in-memory changes to
  ReorderBufferIterTXNInit(), where it can easily be applied to the top
  transaction and, if present, subtransactions.

  Since this code had too many bugs already, noticeably increase test
  coverage.

  Fixes: #14319
  Reported-By: Huan Ruan
  Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
  Backport: 9.4-, where logical decoding was added

* Fix building with LibreSSL. (Heikki Linnakangas, 2016-09-15)

  LibreSSL defines OPENSSL_VERSION_NUMBER to claim that it is version
  2.0.0, but it doesn't have the functions added in OpenSSL 1.1.0. Add
  autoconf checks for the individual functions we need, and stop relying
  on OPENSSL_VERSION_NUMBER.

  Backport to 9.5 and 9.6, like the patch that broke this. In the
  back-branches, there are still a few OPENSSL_VERSION_NUMBER checks
  left, to check for OpenSSL 0.9.8 or 0.9.7. I left them as they were -
  LibreSSL has all those functions, so they work as intended.

  Per buildfarm member curculio.

  Discussion: <2442.1473957669@sss.pgh.pa.us>

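  A minimal sketch of the probe-the-function approach. The HAVE_* macro
  is assumed to come from an AC_CHECK_FUNCS probe in configure; the exact
  symbol name here is illustrative, not the verbatim patch.

      #include <openssl/asn1.h>

      /* Gate on the individual accessor, not OPENSSL_VERSION_NUMBER,
       * which LibreSSL sets to a value that overstates its API level. */
      static const unsigned char *
      asn1_string_bytes(ASN1_STRING *str)
      {
      #ifdef HAVE_ASN1_STRING_GET0_DATA
          return ASN1_STRING_get0_data(str);  /* OpenSSL 1.1.0+ */
      #else
          return ASN1_STRING_data(str);       /* older OpenSSL, LibreSSL */
      #endif
      }
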
* pg_buffercache: Allow huge allocations. (Robert Haas, 2016-09-15)

  Otherwise, users who have configured shared_buffers >= 256GB won't be
  able to use this module. There probably aren't many of those, but it
  doesn't hurt anything to fix it so that it works.

  Backpatch to 9.4, where MemoryContextAllocHuge was introduced. The
  same problem exists in older branches, but there's no easy way to fix
  it there.

  KaiGai Kohei

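  The shape of the change, as a hedged sketch (struct and field names
  abbreviated from the module, not the verbatim patch): the per-buffer
  array is sized by NBuffers, so a very large shared_buffers pushes a
  plain palloc() past its MaxAllocSize (~1 GB) cap.

      /* Before: palloc(sizeof(BufferCachePagesRec) * NBuffers) errors
       * out once the product exceeds MaxAllocSize.  After: */
      fctx->record = (BufferCachePagesRec *)
          MemoryContextAllocHuge(CurrentMemoryContext,
                                 sizeof(BufferCachePagesRec) * NBuffers);
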
* Support OpenSSL 1.1.0. (Heikki Linnakangas, 2016-09-15)

  Changes needed to build at all:

  - Check for SSL_new in configure, now that SSL_library_init is a macro.
  - Do not access struct members directly. This includes some new code in
    pgcrypto, to use the resource owner mechanism to ensure that we don't
    leak OpenSSL handles, now that we can't embed them in other structs
    anymore.
  - RAND_SSLeay() -> RAND_OpenSSL()

  Changes that were needed to silence deprecation warnings, but were not
  strictly necessary:

  - RAND_pseudo_bytes() -> RAND_bytes().
  - SSL_library_init() and OpenSSL_config() -> OPENSSL_init_ssl()
  - ASN1_STRING_data() -> ASN1_STRING_get0_data()
  - DH_generate_parameters() -> DH_generate_parameters_ex()
  - Locking callbacks are not needed with OpenSSL 1.1.0 anymore. (Good
    riddance!)

  Also replace references to SSLEAY_VERSION_NUMBER with
  OPENSSL_VERSION_NUMBER, for the sake of consistency.
  OPENSSL_VERSION_NUMBER has existed since time immemorial.

  Fix SSL test suite to work with OpenSSL 1.1.0. CA certificates must
  have the "CA:true" basic constraint extension now, or OpenSSL will
  refuse them. Regenerate the test certificates with that. The "openssl"
  binary, used to generate the certificates, is also now more picky, and
  throws an error if an X509 extension is specified in "req_extensions",
  but that section is empty.

  Backpatch to 9.5 and 9.6, per popular demand. The file structure was
  somewhat different in earlier branches, so I didn't bother to go
  further than that. In back-branches, we still support OpenSSL 0.9.7
  and above. OpenSSL 0.9.6 should still work too, but I didn't test it.
  In master, we only support 0.9.8 and above.

  Patch by Andreas Karlsson, with additional changes by me.

  Discussion: <20160627151604.GD1051@msg.df7cb.de>

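  On the struct-opacity point: with 1.1.0 the handles must be allocated
  through the library rather than embedded in our own structs. A minimal
  sketch of that style (not the pgcrypto patch itself, which additionally
  ties the handle to a resource owner so it can't leak on error):

      #include <openssl/evp.h>

      /* 'out' must have room for the digest (32 bytes for SHA-256). */
      static int
      digest_sha256(const unsigned char *data, size_t len,
                    unsigned char *out)
      {
          /* Opaque context: allocate via OpenSSL, never on our stack.
           * EVP_MD_CTX_create/destroy become EVP_MD_CTX_new/free in
           * 1.1.0+, but the old names still work there as macros. */
          EVP_MD_CTX *ctx = EVP_MD_CTX_create();
          unsigned int outlen = 0;

          if (ctx == NULL)
              return 0;
          if (EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) &&
              EVP_DigestUpdate(ctx, data, len) &&
              EVP_DigestFinal_ex(ctx, out, &outlen))
          {
              EVP_MD_CTX_destroy(ctx);
              return (int) outlen;
          }
          EVP_MD_CTX_destroy(ctx);
          return 0;
      }
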
* Fix -e option in contrib/intarray/bench/bench.pl. (Tom Lane, 2016-08-17)

  As implemented, -e ran an EXPLAIN but then discarded the output, which
  certainly seems pointless. Make it print to stdout instead. It's been
  like that forever, so back-patch to all supported branches.

  Daniel Gustafsson, reviewed by Andreas Scherbaum

  Patch: <B97BDCB7-A3B3-4734-90B5-EDD586941629@yesql.se>

* Fix typo (Peter Eisentraut, 2016-08-09)

* Don't propagate a null subtransaction snapshot up to parent transaction. (Tom Lane, 2016-08-07)

  This oversight could cause logical decoding to fail to decode an outer
  transaction containing changes, if a subtransaction had an XID but no
  actual changes. Per bug #14279 from Marko Tiikkaja. Patch by Marko
  based on analysis by Andrew Gierth.

  Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>

* Fix assorted fallout from IS [NOT] NULL patch. (Tom Lane, 2016-07-28)

  Commits 4452000f3 et al established semantics for NullTest.argisrow
  that are a bit different from its initial conception: rather than being
  merely a cache of whether we've determined the input to have composite
  type, the flag now has the further meaning that we should apply
  field-by-field testing as per the standard's definition of IS [NOT]
  NULL. If argisrow is false and yet the input has composite type, the
  construct instead has the semantics of IS [NOT] DISTINCT FROM NULL.
  Update the comments in primnodes.h to clarify this, and fix ruleutils.c
  and deparse.c to print such cases correctly. In the case of
  ruleutils.c, this merely results in cosmetic changes in EXPLAIN output,
  since the case can't currently arise in stored rules. However, it
  represents a live bug for deparse.c, which would formerly have sent a
  remote query that had semantics different from the local behavior.
  (From the user's standpoint, this means that testing a remote
  nested-composite column for null-ness could have had unexpected
  recursive behavior much like that fixed in 4452000f3.)

  In a related but somewhat independent fix, make plancat.c set argisrow
  to false in all NullTest expressions constructed to represent
  "attnotnull" constructs. Since attnotnull is actually enforced as a
  simple null-value check, this is a more accurate representation of the
  semantics; we were previously overpromising what it meant for composite
  columns, which might possibly lead to incorrect planner optimizations.
  (It seems that what the SQL spec expects a NOT NULL constraint to mean
  is an IS NOT NULL test, so arguably we are violating the spec and
  should fix attnotnull to do the other thing. If we ever do, this part
  should get reverted.)

  Back-patch, same as the previous commit.

  Discussion: <10682.1469566308@sss.pgh.pa.us>

* Make contrib regression tests safe for Danish locale. (Tom Lane, 2016-07-21)

  In btree_gin and citext, avoid some not-particularly-interesting
  dependencies on the sorting of 'aa'. In tsearch2, use COLLATE "C" to
  remove an uninteresting dependency on locale sort order (and thereby
  allow removal of a variant expected-file).

  Also, in citext, avoid assuming that lower('I') = 'i'. This isn't
  relevant to Danish but it does fail in Turkish.

* Use correct symbol for minimum int64 value (Peter Eisentraut, 2016-07-17)

  The old code used SEQ_MINVALUE to get the smallest int64 value. This
  was done as a convenience to avoid having to deal with
  INT64_IS_BUSTED, but that is obsolete now. Also, it is incorrect
  because the smallest int64 value is actually SEQ_MINVALUE-1. Fix by
  using PG_INT64_MIN.

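  A tiny standalone illustration of the off-by-one, assuming the usual
  definition of SEQ_MINVALUE as the negation of SEQ_MAXVALUE:

      #include <stdint.h>
      #include <stdio.h>

      #define SEQ_MAXVALUE  INT64_MAX
      #define SEQ_MINVALUE  (-SEQ_MAXVALUE)   /* misses the lowest value */

      int
      main(void)
      {
          printf("SEQ_MINVALUE = %lld\n", (long long) SEQ_MINVALUE);
          printf("INT64_MIN    = %lld\n", (long long) INT64_MIN);
          /* prints -9223372036854775807 vs. -9223372036854775808:
           * two's complement has one more negative value than positive */
          return 0;
      }
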
* Fix handling of multixacts predating pg_upgrade (Alvaro Herrera, 2016-06-24)

  After pg_upgrade, it is possible that some tuples' Xmax have
  multixacts corresponding to the old installation; such multixacts
  cannot have running members anymore. In many code sites we already
  know not to read them and clobber them silently, but at least when
  VACUUM tries to freeze a multixact or determine whether one needs
  freezing, there's an attempt to resolve it to its member transactions
  by calling GetMultiXactIdMembers, and if the multixact value is "in
  the future" with regards to the current valid multixact range, an
  error like this is raised:

      ERROR:  MultiXactId 123 has not been created yet -- apparent wraparound

  and vacuuming fails. Per discussion with Andrew Gierth, it is
  completely bogus to try to resolve multixacts coming from before a
  pg_upgrade, regardless of where they stand with regards to the current
  valid multixact range.

  It's possible to get from under this problem by doing SELECT FOR
  UPDATE of the problem tuples, but if tables are large, this is slow
  and tedious, so a more thorough solution is desirable.

  To fix, we realize that multixacts in xmax created in 9.2 and previous
  have a specific bit pattern that is never used in 9.3 and later (we
  already knew this, per comments and infomask tests sprinkled in
  various places, but we weren't leveraging this knowledge
  appropriately). Whenever the infomask of the tuple matches that bit
  pattern, we just ignore the multixact completely as if Xmax wasn't
  set; or, in the case of tuple freezing, we act as if an unwanted value
  is set and clobber it without decoding. This guarantees that no errors
  will be raised, and that the values will be progressively removed
  until all tables are clean. Most callers of GetMultiXactIdMembers are
  patched to recognize directly that the value is a removable "empty"
  multixact and avoid calling GetMultiXactIdMembers altogether.

  To avoid changing the signature of GetMultiXactIdMembers() in back
  branches, we keep the "allow_old" boolean flag but rename it to
  "from_pgupgrade"; if the flag is true, we always return an empty set
  instead of looking up the multixact. (I suppose we could remove the
  argument in the master branch, but I chose not to do so in this
  commit.)

  This was broken all along, but the error-facing message appeared first
  because of commit 8e9a16ab8f7f and was partially fixed in
  a25c2b7c4db3. This fix, backpatched all the way back to 9.3, goes
  approximately in the same direction as a25c2b7c4db3 but should cover
  all cases.

  Bug analysis by Andrew Gierth and Álvaro Herrera.

  A number of public reports match this bug:
  https://www.postgresql.org/message-id/20140330040029.GY4582@tamriel.snowman.net
  https://www.postgresql.org/message-id/538F3D70.6080902@publicrelay.com
  https://www.postgresql.org/message-id/556439CF.7070109@pscs.co.uk
  https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com
  https://www.postgresql.org/message-id/20160615203829.5798.4594@wrigleys.postgresql.org

* Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c. (Tom Lane, 2016-06-05)

  Fix still another bug in commit 35fcb1b3d: it failed to fully
  initialize the SortSupport states it introduced to allow the executor
  to re-check ORDER BY expressions containing distance operators. That
  led to a null pointer dereference if the sortsupport code tried to use
  ssup_cxt. The problem only manifests in narrow cases, explaining the
  lack of previous field reports. It requires a GiST-indexable distance
  operator that lacks SortSupport and is on a pass-by-ref data type,
  which among core+contrib seems to be only btree_gist's interval
  opclass; and it requires the scan to be done as an IndexScan not an
  IndexOnlyScan, which explains how btree_gist's regression test didn't
  catch it.

  Per bug #14134 from Jihyun Yu. Peter Geoghegan

  Report: <20160511154904.2603.43889@wrigleys.postgresql.org>

* Ensure plan stability in contrib/btree_gist regression test. (Tom Lane, 2016-05-12)

  Buildfarm member skink failed with symptoms suggesting that an
  auto-analyze had happened and changed the plan displayed for a test
  query. Although this is evidently of low probability, regression tests
  that sometimes fail are no fun, so add commands to force a bitmap scan
  to be chosen.

* Remove unused macros. (Heikki Linnakangas, 2016-05-02)

  CHECK_PAGE_OFFSET_RANGE() has been unused forever.
  CHECK_RELATION_BLOCK_RANGE() has been unused in pgstatindex.c ever
  since the bt_page_stats() and bt_page_items() functions were moved
  from the pgstattuple module to pageinspect. It still exists in
  pageinspect/btreefuncs.c.

  Daniel Gustafsson

* Revert "Convert contrib/seg's bool-returning SQL functions to V1 call convention." (Tom Lane, 2016-04-28)

  This reverts commit b1dd2f86ce7d43f23f6aae307bb22de826849e7d. That
  turns out to have been based on a faulty diagnosis of why the VS2015
  build was misbehaving. Instead, we need to fix DatumGetBool().

* Convert contrib/seg's bool-returning SQL functions to V1 call convention. (Tom Lane, 2016-04-22)

  It appears that we can no longer get away with using V0 call
  convention for bool-returning functions in newer versions of MSVC. The
  compiler seems to generate code that doesn't clear the higher-order
  bits of the result register, causing the bool result Datum to often
  read as "true" when "false" was intended. This is not very surprising,
  since the function thinks it's returning a bool-width result but
  fmgr_oldstyle assumes that V0 functions return "char *"; what's
  surprising is that that hack worked for so long on so many platforms.

  The only functions of this description in core+contrib are in
  contrib/seg, which we'd intentionally left mostly in V0 style to serve
  as a warning canary if V0 call convention breaks. We could imagine
  hacking things so that they're still V0 (we'd have to redeclare the
  bool-returning functions as returning some suitably wide integer type,
  like size_t, at the C level). But on the whole it seems better to
  convert 'em to V1. We can still leave the pointer- and int-returning
  functions in V0 style, so that the test coverage isn't gone entirely.

  Back-patch to 9.5, since our intention is to support VS2015 in 9.5 and
  later. There's no SQL-level change in the functions' behavior so
  back-patching should be safe enough.

  Discussion: <22094.1461273324@sss.pgh.pa.us>

  Michael Paquier, adjusted some by me

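  For reference, a minimal sketch of the V1 style for one such comparison
  function (SEG is contrib/seg's own type, from segdata.h; the helper
  seg_cmp_internal is hypothetical, standing in for however the module
  compares two values):

      #include "postgres.h"
      #include "fmgr.h"

      PG_FUNCTION_INFO_V1(seg_lt);

      Datum
      seg_lt(PG_FUNCTION_ARGS)
      {
          SEG *a = (SEG *) PG_GETARG_POINTER(0);
          SEG *b = (SEG *) PG_GETARG_POINTER(1);

          /* PG_RETURN_BOOL widens the result properly, so no garbage
           * survives in the high-order bits of the returned Datum. */
          PG_RETURN_BOOL(seg_cmp_internal(a, b) < 0);
      }
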
* Fix core dump in ReorderBufferRestoreChange on alignment-picky platforms. (Tom Lane, 2016-04-14)

  When re-reading an update involving both an old tuple and a new tuple
  from disk, reorderbuffer.c was careless about whether the new tuple is
  suitably aligned for direct access --- in general, it isn't. We'd
  missed seeing this in the buildfarm because the contrib/test_decoding
  tests exercise this code path only a few times, and by chance all of
  those cases have old tuples with length a multiple of 4, which is
  usually enough to make the access to the new tuple's t_len safe. For
  some still-not-entirely-clear reason, however, Debian's sparc build
  gets a bus error, as reported by Christoph Berg; perhaps it's assuming
  8-byte alignment of the pointer?

  The lack of previous field reports is probably because you need all of
  these conditions to trigger a crash: an alignment-picky platform (not
  Intel), a transaction large enough to spill to disk, an update within
  that xact that changes a primary-key field and has an odd-length old
  tuple, and of course logical decoding tracing the transaction.

  Avoid the alignment assumption by using memcpy instead of fetching
  t_len directly, and add a test case that exposes the crash on picky
  platforms.

  Back-patch to 9.4 where the bug was introduced.

  Discussion: <20160413094117.GC21485@msg.credativ.de>

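  The general pattern, as a standalone sketch (names illustrative): on
  strict-alignment hardware, read a multi-byte field out of raw buffer
  bytes with memcpy rather than by casting and dereferencing.

      #include <stdint.h>
      #include <string.h>

      /* Unsafe if 'raw' isn't suitably aligned for the struct:
       *     uint32_t len = ((tuple_header *) raw)->t_len;
       * Safe everywhere: copy the bytes out first. */
      static uint32_t
      read_len_field(const char *raw, size_t len_offset)
      {
          uint32_t len;

          memcpy(&len, raw + len_offset, sizeof(len));
          return len;
      }
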
* Add missing checks to some of pageinspect's BRIN functions (Alvaro Herrera, 2016-03-28)

  brin_page_type() and brin_metapage_info() did not enforce being called
  by superuser, like other pageinspect functions that take bytea do.
  Since they don't verify the passed page thoroughly, it is possible to
  use them to read the server memory with a carefully crafted bytea
  value, up to a few kilobytes from where the input bytea is located.
  Have them throw errors if called by a non-superuser.

  Report and initial patch: Andreas Seltenreich

  Security: CVE-2016-3065

* Fix phony .PHONY. (Tom Lane, 2016-03-19)

  A couple makefiles had misspelled the magic .PHONY target as PHONY.

* Avoid unlikely data-loss scenarios due to rename() without fsync. (Andres Freund, 2016-03-09)

  Renaming a file using rename(2) is not guaranteed to be durable in
  face of crashes. Use the previously added
  durable_rename()/durable_link_or_rename() in various places where we
  previously just renamed files.

  Most of the changed call sites are arguably not critical, but it seems
  better to err on the side of too much durability. The most prominent
  known case where the previously missing fsyncs could cause data loss
  is crashes at the end of a checkpoint. After the actual checkpoint has
  been performed, old WAL files are recycled. When they're filled, their
  contents are fdatasynced, but we did not fsync the containing
  directory. An OS/hardware crash in an unfortunate moment could then
  end up leaving that file with its old name, but new content; WAL
  replay would thus not replay it.

  Reported-By: Tomas Vondra
  Author: Michael Paquier, Tomas Vondra, Andres Freund
  Discussion: 56583BDD.9060302@2ndquadrant.com
  Backpatch: All supported branches

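  The underlying pattern, as a standalone POSIX sketch (simplified
  relative to the real durable_rename(), which also fsyncs the file
  itself and handles more error cases):

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      /* rename() makes the new name visible, but only an fsync of the
       * containing directory makes the rename itself crash-durable. */
      static int
      rename_durably(const char *oldpath, const char *newpath,
                     const char *dirpath)
      {
          int dirfd;

          if (rename(oldpath, newpath) < 0)
              return -1;
          dirfd = open(dirpath, O_RDONLY);
          if (dirfd < 0)
              return -1;
          if (fsync(dirfd) < 0)
          {
              close(dirfd);
              return -1;
          }
          return close(dirfd);
      }
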
* ltree: Zero padding bytes when allocating memory for externally visible data. (Andres Freund, 2016-03-08)

  ltree/ltree_gist/ltxtquery's headers store data at MAXALIGN alignment,
  requiring some padding bytes. So far we left these uninitialized. Zero
  those by using palloc0.

  Author: Andres Freund
  Reported-By: Andres Freund / valgrind / buildfarm animal skink
  Backpatch: 9.1-

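  A hedged sketch of the pattern (the size expression is illustrative):
  when an allocation with alignment padding becomes externally visible
  on-disk data, allocate it zeroed rather than leaving the padding bytes
  as whatever palloc() happened to hand back.

      /* palloc() leaves the MAXALIGN padding after the header
       * uninitialized, and those bytes would reach disk; palloc0()
       * zeroes the entire chunk. */
      ltree *res = (ltree *) palloc0(LTREE_HDRSIZE + datasize);

      SET_VARSIZE(res, LTREE_HDRSIZE + datasize);
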
* logical decoding: Fix handling of large old tuples with replica identity full. (Andres Freund, 2016-03-05)

  When decoding the old version of an UPDATE or DELETE change, and if
  that tuple was bigger than MaxHeapTupleSize, we either Assert'ed out,
  or failed in more subtle ways in non-assert builds. Normally
  individual tuples aren't bigger than MaxHeapTupleSize, with big datums
  toasted. But that's not the case for the old version of a tuple for
  logical decoding; the replica identity is logged as one piece. With
  the default replica identity btree limits that to small tuples, but
  that's not the case for FULL.

  Change the tuple buffer infrastructure to separately allocate
  over-large tuples, instead of always going through the slab cache.

  This unfortunately requires changing the ReorderBufferTupleBuf
  definition, we need to store the allocated size someplace. To avoid
  requiring output plugins to recompile, don't store HeapTupleHeaderData
  directly after HeapTupleData, but point to it via t_data; that leaves
  room for the allocated size. As there's no reason for an output plugin
  to look at ReorderBufferTupleBuf->t_data.header, remove the field. It
  was just a minor convenience having it directly accessible.

  Reported-By: Adam Dratwiński
  Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com

* logical decoding: old/newtuple in spooled UPDATE changes was switched around. (Andres Freund, 2016-03-05)

  Somehow I managed to flip the order of restoring old & new tuples when
  de-spooling a change in a large transaction from disk. This happens to
  only take effect when a change is spooled to disk which has old/new
  versions of the tuple. That only is the case for UPDATEs where the
  primary key changed or where replica identity is changed to FULL.

  The tests didn't catch this because either spooled updates, or updates
  that changed primary keys, were tested; not both at the same time.
  Found while adding tests for the following commit.

  Backpatch: 9.4, where logical decoding was added

* logical decoding: Tell reorderbuffer about all xids. (Andres Freund, 2016-03-05)

  Logical decoding's reorderbuffer keeps transactions in an LSN ordered
  list for efficiency. To make that efficiently possible, upper-level
  xids are forced to be logged before nested subtransaction xids. That
  only works though if these records are all looked at: unfortunately we
  didn't do so for e.g. row level locks, which are otherwise
  uninteresting for logical decoding.

  This could lead to errors like: "ERROR: subxact logged without
  previous toplevel record".

  It's not sufficient to just look at row locking records, the xid could
  appear first due to a lot of other types of records (which will
  trigger the transaction to be marked logged with
  MarkCurrentTransactionIdLoggedIfAny). So invent infrastructure to tell
  reorderbuffer about xids seen, when they'd otherwise not pass through
  reorderbuffer.c.

  Reported-By: Jarred Ward
  Bug: #13844
  Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
  Backpatch: 9.4, where logical decoding was added

* Force synchronous_commit=on in test_decoding's concurrent_ddl_dml.spec. (Andres Freund, 2016-03-03)

  Otherwise running installcheck-force on a server with
  synchronous_commit=off will result in the tests failing. All the other
  tests already do so...

  Backpatch: 9.4, where logical decoding was added

* logical decoding: fix decoding of a commit's commit time. (Andres Freund, 2016-03-02)

  When adding replication origins in 5aa235042, I somehow managed to set
  the timestamp of decoded transactions to InvalidXLogRecptr when
  decoding one made without a replication origin. Fix that, and the
  wrong type of the new commit_time variable.

  This didn't trigger a regression test failure because we explicitly
  don't show commit timestamps in the regression tests, as they
  obviously are variable. Add a test that checks that a decoded commit's
  timestamp is within minutes of NOW() from before the commit.

  Reported-By: Weiping Qu
  Diagnosed-By: Artur Zakirov
  Discussion: 56D4197E.9050706@informatik.uni-kl.de,
  56D42918.1010108@postgrespro.ru
  Backpatch: 9.5, where 5aa235042 originates.

* Fix multiple bugs in contrib/pgstattuple's pgstatindex() function. (Tom Lane, 2016-02-18)

  Dead or half-dead index leaf pages were incorrectly reported as live,
  as a consequence of a code rearrangement I made (during a moment of
  severe brain fade, evidently) in commit d287818eb514d431.

  The index metapage was not counted in index_size, causing that result
  to not agree with the actual index size on-disk.

  Index root pages were not counted in internal_pages, which is
  inconsistent compared to the case of a root that's also a leaf
  (one-page index), where the root would be counted in leaf_pages. Aside
  from that inconsistency, this could lead to additional transient
  discrepancies between the reported page counts and index_size, since
  it's possible for pgstatindex's scan to see zero or multiple pages
  marked as BTP_ROOT, if the root moves due to a split during the scan.
  With these fixes, index_size will always be exactly one page more than
  the sum of the displayed page counts.

  Also, the index_size result was incorrectly documented as being
  measured in pages; it's always been measured in bytes. (While fixing
  that, I couldn't resist doing some small additional wordsmithing on
  the pgstattuple docs.)

  Including the metapage causes the reported index_size to not be zero
  for an empty index. To preserve the desired property that the
  pgstattuple regression test results are platform-independent (ie,
  BLCKSZ configuration independent), scale the index_size result in the
  regression tests.

  The documentation issue was reported by Otsuka Kenji, and the
  inconsistent root page counting by Peter Geoghegan; the other problems
  noted by me. Back-patch to all supported branches, because this has
  been broken for a long time.

* postgres_fdw: Avoid possible misbehavior when RETURNING tableoid column only. (Robert Haas, 2016-02-04)

  deparseReturningList ended up adding RETURNING NULL to the query, but
  code elsewhere saw an empty list of attributes and concluded that it
  should not expect tuples from the remote side.

  Etsuro Fujita and Robert Haas, reviewed by Thom Brown

* Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage. (Tom Lane, 2016-02-03)

  Commit e09996ff8dee3f70 was one brick shy of a load: it didn't insist
  that the detected JSON number be the whole of the supplied string.
  This allowed inputs such as "2016-01-01" to be misdetected as valid
  JSON numbers. Per bug #13906 from Dmitry Ryabov.

  In passing, be more wary of zero-length input (I'm not sure this can
  happen given current callers, but better safe than sorry), and do some
  minor cosmetic cleanup.

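  A standalone sketch of the whole-string requirement (strtod is used
  only for illustration; the real check reuses the JSON lexer and is
  stricter about what counts as a number):

      #include <stdbool.h>
      #include <stdlib.h>

      /* "2016-01-01" has the valid numeric prefix "2016"; requiring the
       * parse to consume every byte rejects it. */
      static bool
      is_whole_string_number(const char *str, size_t len)
      {
          char *end;

          if (len == 0)
              return false;       /* be wary of zero-length input */
          (void) strtod(str, &end);
          return end == str + len;
      }
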
* Fix incorrect pattern-match processing in psql's \det command. (Tom Lane, 2016-01-29)

  listForeignTables' invocation of processSQLNamePattern did not match
  up with the other ones that handle potentially-schema-qualified names;
  it failed to make use of pg_table_is_visible() and also passed the
  name arguments in the wrong order. Bug seems to have been aboriginal
  in commit 0d692a0dc9f0e532. It accidentally sort of worked as long as
  you didn't inquire too closely into the behavior, although the
  silliness was later exposed by inconsistencies in the test queries
  added by 59efda3e50ca4de6 (which I probably should have questioned at
  the time, but didn't).

  Per bug #13899 from Reece Hart. Patch by Reece Hart and Tom Lane.
  Back-patch to all affected branches.

* Remove new coupling between NAMEDATALEN and MAX_LEVENSHTEIN_STRLEN. (Tom Lane, 2016-01-22)

  Commit e529cd4ffa605c6f introduced an Assert requiring NAMEDATALEN to
  be less than MAX_LEVENSHTEIN_STRLEN, which has been 255 for a long
  time. Since up to that instant we had always allowed NAMEDATALEN to
  be substantially more than that, this was ill-advised.

  It's debatable whether we need MAX_LEVENSHTEIN_STRLEN at all (versus
  putting a CHECK_FOR_INTERRUPTS into the loop), or whether it has to be
  so tight; but this patch takes the narrower approach of just not
  applying the MAX_LEVENSHTEIN_STRLEN limit to calls from the parser.

  Trusting the parser for this seems reasonable, first because the
  strings are limited to NAMEDATALEN which is unlikely to be hugely more
  than 256, and second because the maximum distance is tightly
  constrained by MAX_FUZZY_DISTANCE (though we'd forgotten to make use
  of that limit in one place). That means the cost is not really O(mn)
  but more like O(max(m,n)).

  Relaxing the limit for user-supplied calls is left for future
  research; given the lack of complaints to date, it doesn't seem very
  high priority.

  In passing, fix confusion between lengths-in-bytes and
  lengths-in-chars in comments and error messages.

  Per gripe from Kevin Day; solution suggested by Robert Haas.
  Back-patch to 9.5 where the unwanted restriction was introduced.

* Use LOAD not actual code execution to pull in plpython library. (Tom Lane, 2016-01-11)

  Commit 866566a690bb9916 is insufficient to prevent dump/reload
  failures when using transform modules in a database with both
  plpython2 and plpython3 installed. The reason is that the transform
  extension scripts use DO blocks as a mechanism to pull in the
  libpython library before creating the transform function. It's
  necessary to preload the library because the dynamic loader won't do
  it for us on every platform, leading to "unresolved symbol" failures
  when the transform library is loaded. But it's *not* necessary to
  execute Python code, and doing so will provoke a
  multiple-Pythons-are-loaded error even after the preceding commit.

  To fix, use LOAD instead of a DO block. That requires superuser
  privilege, but creation of a C function does anyway. It also embeds
  knowledge of the underlying library name for each PL language; but
  that's wired into the initdb-time contents of pg_pltemplate too, so
  that doesn't seem like a large problem either.

  Note that CREATE TRANSFORM as such doesn't call the language module at
  all.

  Per a report from Paul Jones. Back-patch to 9.5 where transform
  modules were introduced.

* Sort $(wildcard) output where needed for reproducible build output. (Tom Lane, 2016-01-05)

  The order of inclusion of .o files makes a difference in linker
  output; not a functional difference, but still a bitwise difference,
  which annoys some packagers who would like reproducible builds.

  Report and patch by Christoph Berg

* Add a comment noting that FDWs don't have to implement EXCEPT or LIMIT TO. (Tom Lane, 2015-12-31)

  postgresImportForeignSchema pays attention to IMPORT's EXCEPT and
  LIMIT TO options, but only as an efficiency hack, not for correctness'
  sake. The FDW documentation does explain that, but someone using
  postgres_fdw.c as a coding guide might not remember it, so let's add a
  comment here.

  Per question from Regina Obe.

* Add forgotten CHECK_FOR_INTERRUPTS calls in pgcrypto's crypt() (Alvaro Herrera, 2015-12-27)

  Both the Blowfish and DES implementations of crypt() can take an
  arbitrarily long time, depending on the number of rounds specified by
  the caller; make sure they can be interrupted.

  Author: Andreas Karlsson
  Reviewer: Jeff Janes

  Backpatch to 9.1.

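  The shape of the fix, sketched (the loop body is hypothetical; the
  point is the placement of the check):

      /* Each round is cheap, but the caller controls the round count,
       * so the total runtime is unbounded; stay cancellable. */
      for (i = 0; i < nrounds; i++)
      {
          CHECK_FOR_INTERRUPTS();     /* errors out on a pending cancel */
          run_one_round(&state);      /* hypothetical round function */
      }
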
* Allow foreign and custom joins to handle EvalPlanQual rechecks. (Robert Haas, 2015-12-08)

  Commit e7cb7ee14555cc9c5773e2c102efd6371f6f2005 provided basic
  infrastructure for allowing a foreign data wrapper or custom scan
  provider to replace a join of one or more tables with a scan. However,
  this infrastructure failed to take into account the need for possible
  EvalPlanQual rechecks, and ExecScanFetch would fail an assertion (or
  just overwrite memory) if such a check was attempted for a plan
  containing a pushed-down join. To fix, adjust the EPQ machinery to
  skip some processing steps when scanrelid == 0, making those the
  responsibility of scan's recheck method, which also has the
  responsibility in this case of correctly populating the relevant slot.

  To allow foreign scans to gain control in the right place to make use
  of this new facility, add a new, optional RecheckForeignScan method.
  Also, allow a foreign scan to have a child plan, which can be used to
  correctly populate the slot (or perhaps for something else, but this
  is the only use currently envisioned).

  KaiGai Kohei, reviewed by Robert Haas, Etsuro Fujita, and Kyotaro
  Horiguchi.

* Dodge a macro-name conflict with Perl. (Tom Lane, 2015-11-19)

  Some versions of Perl export a macro named HS_KEY. This creates a
  conflict in contrib/hstore_plperl against hstore's macro of the same
  name. The most future-proof solution seems to be to rename our macro;
  I chose HSTORE_KEY. For consistency, rename HS_VAL and related macros
  similarly.

  Back-patch to 9.5. contrib/hstore_plperl doesn't exist before that so
  there is no need to worry about the conflict in older releases.

  Per reports from Marco Atzeri and Mike Blackwell.

* Message style improvements (Peter Eisentraut, 2015-10-28)

  Message style, plurals, quoting, spelling, consistency with similar
  messages

* Allow FDWs to push down quals without breaking EvalPlanQual rechecks. (Robert Haas, 2015-10-15)

  This fixes a long-standing bug which was discovered while
  investigating the interaction between the new join pushdown code and
  the EvalPlanQual machinery: if a ForeignScan appears on the inner side
  of a parameterized nestloop, an EPQ recheck would re-return the
  original tuple even if it no longer satisfied the pushed-down quals
  due to changed parameter values.

  This fix adds a new member to ForeignScan and ForeignScanState and a
  new argument to make_foreignscan, and requires changes to FDWs which
  push down quals to populate that new argument with a list of quals
  they have chosen to push down. Therefore, I'm only back-patching to
  9.5, even though the bug is not new in 9.5.

  Etsuro Fujita, reviewed by me and by Kyotaro Horiguchi.

* Prevent stack overflow in query-type functions. (Noah Misch, 2015-10-05)

  The tsquery, ltxtquery and query_int data types have a common
  ancestor. Having acquired check_stack_depth() calls independently,
  each was missing at least one call. Back-patch to 9.0 (all supported
  versions).

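  The pattern, sketched with hypothetical parser types (the real
  tsquery/ltxtquery/query_int parsers share this recursive shape): every
  recursive entry point checks the stack depth so a deeply nested query
  raises a clean error instead of overflowing the C stack.

      static QueryNode *
      parse_expr(ParserState *state)
      {
          check_stack_depth();    /* clean error near stack exhaustion */

          if (next_token_is(state, TOKEN_OPEN_PAREN))
          {
              consume_token(state);
              return parse_expr(state);   /* recursion guarded above */
          }
          return parse_leaf(state);
      }
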
* pgcrypto: Detect and report too-short crypt() salts. (Noah Misch, 2015-10-05)

  Certain short salts crashed the backend or disclosed a few bytes of
  backend memory. For existing salt-induced error conditions, emit a
  message saying as much. Back-patch to 9.0 (all supported versions).

  Josh Kupershmidt

  Security: CVE-2015-5288

* Improve contrib/pg_stat_statements' handling of garbage collection failure. (Tom Lane, 2015-10-04)

  If we can't read the query texts file (whether because out-of-memory,
  or for some other reason), give up and reset the file to empty,
  discarding all stored query texts, though not the statistics per se.
  We used to leave things alone and hope for better luck next time, but
  the problem is that the file is only going to get bigger and even
  harder to slurp into memory. Better to do something that will get us
  out of trouble.

  Likewise reset the file to empty for any other failure within
  gc_qtexts(). The previous behavior after a write error was to discard
  query texts but not do anything to truncate the file, which is just
  weird.

  Also, increase the maximum supported file size from MaxAllocSize to
  MaxAllocHugeSize; this makes it more likely we'll be able to do a
  garbage collection successfully.

  Also, fix recalculation of mean_query_len within entry_dealloc() to
  match the calculation in gc_qtexts(). The previous coding overlooked
  the possibility of dropped texts (query_len == -1) and would
  underestimate the mean of the remaining entries in such cases, thus
  possibly causing excess garbage collection cycles.

  In passing, add some errdetail to the log entry that complains about
  insufficient memory to read the query texts file, which after all was
  Jim Nasby's original complaint.

  Back-patch to 9.4 where the current handling of query texts was
  introduced.

  Peter Geoghegan, rather editorialized upon by me

* Improve errhint() about replication slot naming restrictions. (Andres Freund, 2015-10-03)

  The existing hint talked about "may only contain letters", but the
  actual requirement is more strict: only lower case letters are
  allowed.

  Reported-By: Rushabh Lathia
  Author: Rushabh Lathia
  Discussion: AGPqQf2x50qcwbYOBKzb4x75sO_V3g81ZsA8+Ji9iN5t_khFhQ@mail.gmail.com
  Backpatch: 9.4-, where replication slots were added

* Improve handling of collations in contrib/postgres_fdw. (Tom Lane, 2015-09-24)

  If we have a local Var of say varchar type with default collation, and
  we apply a RelabelType to convert that to text with default collation,
  we don't want to consider that as creating an FDW_COLLATE_UNSAFE
  situation. It should be okay to compare that to a remote Var, so long
  as the remote Var determines the comparison collation. (When we
  actually ship such an expression to the remote side, the local Var
  would become a Param with default collation, meaning the remote Var
  would in fact control the comparison collation, because non-default
  implicit collation overrides default implicit collation in
  parse_collate.c.)

  To fix, be more precise about what FDW_COLLATE_NONE means: it applies
  either to a noncollatable data type or to a collatable type with
  default collation, if that collation can't be traced to a remote Var.
  (When it can, FDW_COLLATE_SAFE is appropriate.) We were essentially
  using that interpretation already at the Var/Const/Param level, but we
  weren't bubbling it up properly.

  An alternative fix would be to introduce a separate
  FDW_COLLATE_DEFAULT value to describe the second situation, but that
  would add more code without changing the actual behavior, so it didn't
  seem worthwhile.

  Also, since we're clarifying the rule to be that we care about whether
  operator/function input collations match, there seems no need to fail
  immediately upon seeing a Const/Param/non-foreign-Var with nondefault
  collation. We only have to reject if it appears in a
  collation-sensitive context (for example, "var IS NOT NULL" is
  perfectly safe from a collation standpoint, whatever collation the var
  has). So just set the state to UNSAFE rather than failing immediately.

  Per report from Jeevan Chalke. This essentially corrects some sloppy
  thinking in commit ed3ddf918b59545583a4b374566bc1148e75f593, so
  back-patch to 9.3 where that logic appeared.

* test_decoding: Protect against rare spurious test failures. (Andres Freund, 2015-09-22)

  A bunch of tests missed specifying that empty transactions shouldn't
  be displayed. That causes problems when e.g. autovacuum runs in an
  unfortunate moment. The tests in question only run for a very short
  time, making this quite unlikely.

  Reported-By: Buildfarm member axolotl
  Backpatch: 9.4, where logical decoding was introduced

* Fix error message wording in previous sslinfo commit (Alvaro Herrera, 2015-09-08)

* Add more sanity checks in contrib/sslinfo (Alvaro Herrera, 2015-09-07)

  We were missing a few return checks on OpenSSL calls. Should be pretty
  harmless, since we haven't seen any user reports about problems, and
  this is not a high-traffic module anyway; still, a bug is a bug, so
  backpatch this all the way back to 9.0.

  Author: Michael Paquier, while reviewing another sslinfo patch

* Fix misc typos. (Heikki Linnakangas, 2015-09-05)

  Oskari Saarenmaa. Backpatch to stable branches where applicable.

* Fix sepgsql regression tests. (Joe Conway, 2015-08-30)

  The regression tests for sepgsql were broken by changes in the base
  distro as-shipped policies. Specifically, definition of unconfined_t
  in the system default policy was changed to bypass multi-category
  rules, which the regression test depended on. Fix that by defining a
  custom privileged domain (sepgsql_regtest_superuser_t) and using it
  instead of system's unconfined_t domain. The new
  sepgsql_regtest_superuser_t domain performs almost like the current
  unconfined_t, but restricted by multi-category policy as the
  traditional unconfined_t was.

  The custom policy module is a self defined domain, and so should not
  be affected by related future system policy changes. However, it still
  uses the unconfined_u:unconfined_r pair for selinux-user and role.
  Those definitions have not been changed for several years and seem
  less risky to rely on than the unconfined_t domain. Additionally, if
  we define custom user/role, they would need to be manually defined at
  the operating system level, adding more complexity to an already
  non-standard and complex regression test.

  Back-patch to 9.3. The regression tests will need more work before
  working correctly on 9.2. Starting with 9.2, sepgsql has had
  dependencies on libselinux versions that are only available on newer
  distros with the changed set of policies (e.g. RHEL 7.x). On 9.1
  sepgsql works fine with the older distros with original policy set
  (e.g. RHEL 6.x), and on which the existing regression tests work fine.
  We might want eventually change 9.1 sepgsql regression tests to be
  more independent from the underlying OS policies, however more work
  will be needed to make that happen and it is not clear that it is
  worth the effort.

  Kohei KaiGai with review by Adam Brightwell and me, commentary by
  Stephen, Alvaro, Tom, Robert, and others.

* Improve spelling (Peter Eisentraut, 2015-08-22)