path: root/src

* Stamp 9.1.14. [tag: REL9_1_14]  (Tom Lane, 2014-07-21)

* Translation updates  (Peter Eisentraut, 2014-07-21)

* Update time zone data files to tzdata release 2014e.  (Tom Lane, 2014-07-19)

  DST law changes in Crimea, Egypt, Morocco. New zone Antarctica/Troll for
  Norwegian base in Queen Maud Land.

* Fix two low-probability memory leaks in regular expression parsing.  (Tom Lane, 2014-07-18)

  If pg_regcomp failed after having invoked markst/cleanst, it would leak
  any "struct subre" nodes it had created. (We've already detected all
  regex syntax errors at that point, so the only likely causes of later
  failure would be query cancel or out-of-memory.) To fix, make sure
  freesrnode knows the difference between the pre-cleanst and post-cleanst
  cleanup procedures. Add some documentation of this less-than-obvious
  point.

  Also, newlacon did the wrong thing with an out-of-memory failure from
  realloc(), so that the previously allocated array would be leaked.

  Both of these are pretty low-probability scenarios, but a bug is a bug,
  so patch all the way back.

  Per bug #10976 from Arthur O'Dwyer.

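  The newlacon() issue is the classic unsafe-realloc pattern; a minimal
  standalone sketch with invented names (not the actual regex-engine code):

      #include <stdlib.h>

      /*
       * Writing realloc()'s result straight back into the only pointer to
       * the array loses the old allocation if realloc fails and returns
       * NULL.  A temporary keeps the old array reachable on failure.
       */
      static int
      grow_array(int **items, size_t *nitems)
      {
          int *tmp = realloc(*items, (*nitems + 1) * sizeof(int));

          if (tmp == NULL)
              return -1;          /* caller still owns the old, valid array */
          *items = tmp;
          (*nitems)++;
          return 0;
      }
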
* Fix REASSIGN OWNED for text search objects  (Alvaro Herrera, 2014-07-15)

  Trying to reassign objects owned by a user that had text search
  dictionaries or configurations used to fail with:

      ERROR: unexpected classid 3600
  or
      ERROR: unexpected classid 3602

  Fix by adding cases for those object types in a switch in pg_shdepend.c.

  Both REASSIGN OWNED and text search objects go back all the way to 8.1,
  so backpatch to all supported branches. In 9.3 the alter-owner code was
  made generic, so the required change in recent branches is pretty simple;
  however, for 9.2 and older ones we need some additional reshuffling to
  enable specifying objects by OID rather than name.

  Text search templates and parsers are not owned objects, so there's no
  change required for them.

  Per bug #9749 reported by Michal Novotný.

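  A minimal reproduction sketch of the failure mode, with invented role and
  dictionary names:

      CREATE ROLE olduser;
      CREATE ROLE newuser;
      SET ROLE olduser;
      CREATE TEXT SEARCH DICTIONARY mydict (TEMPLATE = simple);
      RESET ROLE;
      REASSIGN OWNED BY olduser TO newuser;
      -- formerly: ERROR: unexpected classid 3600
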
* Reset master xmin when hot_standby_feedback disabled.  (Simon Riggs, 2014-07-15)

  If the walsender is holding the standby's xmin, make sure we reset it to
  0 when hot_standby_feedback is changed from on to off.

* Add autocompletion of locale keywords for CREATE DATABASE  (Magnus Hagander, 2014-07-12)

  Adds support for autocomplete of LC_COLLATE and LC_CTYPE to the CREATE
  DATABASE command in psql.

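  The completed keywords in context (an illustrative command, not taken
  from the commit):

      CREATE DATABASE mydb TEMPLATE template0
          LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
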
* Fix bug with whole-row references to append subplans.  (Tom Lane, 2014-07-11)

  ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
  TupleTableSlot just once per query. But if the input is coming from an
  Append (or, perhaps, other cases?) more than one slot might be returned
  over the query run. This led to "record type has not been registered"
  errors when a composite datum was extracted from a non-blessed slot.

  This bug has been there a long time; I guess it escaped notice because
  when dealing with subqueries the planner tends to expand whole-row Vars
  into RowExprs, which don't have the same problem. It is possible to
  trigger the problem in all active branches, though, as illustrated by the
  added regression test.

* Don't assume a subquery's output is unique if there's a SRF in its tlist.  (Tom Lane, 2014-07-08)

  While the x output of "select x from t group by x" can be presumed
  unique, this does not hold for "select x, generate_series(1,10) from t
  group by x", because we may expand the set-returning function after the
  grouping step. (Perhaps that should be re-thought; but considering all
  the other oddities involved with SRFs in targetlists, it seems unlikely
  we'll change it.)

  Put a check in query_is_distinct_for() so it's not fooled by such cases.

  Back-patch to all supported branches.

  David Rowley

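  The two cases side by side (t and x are placeholders):

      SELECT x FROM t GROUP BY x;                          -- unique per x
      SELECT x, generate_series(1, 10) FROM t GROUP BY x;  -- x repeats 10 times
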
* Add some errdetail to checkRuleResultList().  (Tom Lane, 2014-07-02)

  This function wasn't originally thought to be really user-facing, because
  converting a table to a view isn't something we expect people to do
  manually. So not all that much effort was spent on the error messages; in
  particular, while the code will complain that you got the column types
  wrong it won't say exactly what they are. But since we repurposed the
  code to also check compatibility of rule RETURNING lists, it's definitely
  user-facing. It now seems worthwhile to add errdetail messages showing
  exactly what the conflict is when there's a mismatch of column names or
  types.

  This is prompted by bug #10836 from Matthias Raffelsieper, which might
  have been forestalled if the error message had reported the wrong column
  type as being "record".

  Per Alvaro's advice, back-patch to branches before 9.4, but resist the
  temptation to rephrase any existing strings there. Adding new strings is
  not really a translation degradation; anyway having the info presented in
  English is better than not having it at all.

* Back-patch "Fix EquivalenceClass processing for nested append relations".  (Tom Lane, 2014-06-26)

  When we committed a87c729153e372f3731689a7be007bc2b53f1410, we somehow
  failed to notice that it didn't merely improve plan quality for
  expression indexes; there were very closely related cases that failed
  outright with "could not find pathkey item to sort". The failing cases
  seem to be those where the planner was already capable of selecting a
  MergeAppend plan, and there was inheritance involved: the lack of
  appropriate eclass child members would prevent
  prepare_sort_from_pathkeys() from succeeding on the MergeAppend's child
  plan nodes for inheritance child tables.

  Accordingly, back-patch into 9.1 through 9.3, along with an extra
  regression test case covering the problem.

  Per trouble report from Michael Glaesemann.

* Don't allow foreign tables with OIDs.  (Heikki Linnakangas, 2014-06-24)

  The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
  was still possible with default_with_oids=true. But the rest of the
  system, including pg_dump, isn't prepared to handle foreign tables with
  OIDs properly.

  Backpatch down to 9.1, where foreign tables were introduced. It's
  possible that there are databases out there that already have foreign
  tables with OIDs. There isn't much we can do about that, but at least we
  can prevent them from being created in the future.

  Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.

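  The loophole being closed, sketched with an invented server name:

      SET default_with_oids = true;
      CREATE FOREIGN TABLE ft (a int) SERVER myserver;
      -- now rejected instead of silently creating a foreign table with OIDs
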
* Avoid leaking memory while evaluating arguments for a table function.  (Tom Lane, 2014-06-19)

  ExecMakeTableFunctionResult evaluated the arguments for a
  function-in-FROM in the query-lifespan memory context. This is
  insignificant in simple cases where the function relation is scanned only
  once; but if the function is in a sub-SELECT or is on the inside of a
  nested loop, any memory consumed during argument evaluation can add up
  quickly. (The potential for trouble here had been foreseen long ago, per
  existing comments; but we'd not previously seen a complaint from the
  field about it.) To fix, create an additional temporary context just for
  this purpose.

  Per an example from MauMau. Back-patch to all active branches.

* Make pqsignal() available to pg_regress of ECPG and isolation suites.  (Noah Misch, 2014-06-14)

  Commit 453a5d91d49e4d35054f92785d830df4067e10c1 made it available to the
  src/test/regress build of pg_regress, but all pg_regress builds need the
  same treatment. Patch 9.2 through 8.4; in 9.3 and later, pg_regress gets
  pqsignal() via libpgport.

* Secure Unix-domain sockets of "make check" temporary clusters.  (Noah Misch, 2014-06-14)

  Any OS user able to access the socket can connect as the bootstrap
  superuser and proceed to execute arbitrary code as the OS user running
  the test. Protect against that by placing the socket in a temporary,
  mode-0700 subdirectory of /tmp. The pg_regress-based test suites and the
  pg_upgrade test suite were vulnerable; the $(prove_check)-based test
  suites were already secure.

  Back-patch to 8.4 (all supported versions). The hazard remains wherever
  the temporary cluster accepts TCP connections, notably on Windows.

  As a convenient side effect, this lets testing proceed smoothly in builds
  that override DEFAULT_PGSOCKET_DIR. Popular non-default values like
  /var/run/postgresql are often unwritable to the build user.

  Security: CVE-2014-0067

* Add mkdtemp() to libpgport.  (Noah Misch, 2014-06-14)

  This function is pervasive on free software operating systems; import
  NetBSD's implementation. Back-patch to 8.4, like the commit that will
  harness it.

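  A minimal illustration of the imported API: the template's trailing
  XXXXXX is rewritten in place and the directory is created with mode 0700,
  which is what makes it a safe home for the test sockets above (the path
  prefix here is invented):

      #include <stdio.h>
      #include <stdlib.h>

      int
      main(void)
      {
          char dirtemplate[] = "/tmp/pg_check-XXXXXX";

          if (mkdtemp(dirtemplate) == NULL)
          {
              perror("mkdtemp");
              return 1;
          }
          printf("private socket directory: %s\n", dirtemplate);
          return 0;
      }
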
* Fix pg_restore's processing of old-style BLOB COMMENTS data.  (Tom Lane, 2014-06-12)

  Prior to 9.0, pg_dump handled comments on large objects by dumping a
  bunch of COMMENT commands into a single BLOB COMMENTS archive object.
  With sufficiently many such comments, some of the commands would likely
  get split across bufferloads when restoring, causing failures in
  direct-to-database restores (though no problem would be evident in text
  output). This is the same type of issue we have with table data dumped as
  INSERT commands, and it can be fixed in the same way, by using a mini SQL
  lexer to figure out where the command boundaries are. Fortunately, the
  COMMENT commands are no more complex to lex than INSERTs, so we can just
  re-use the existing lexer for INSERTs.

  Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.

* Remove inadvertent copyright violation in largeobject regression test.  (Tom Lane, 2014-06-12)

  Robert Frost is no longer with us, but his copyrights still are, so let's
  stop using "Stopping by Woods on a Snowy Evening" as test data before
  somebody decides to sue us. Wordsworth is more safely dead.

* Fix ancient encoding error in hungarian.stop.  (Tom Lane, 2014-06-10)

  When we grabbed this file off the Snowball project's website, we
  mistakenly supposed that it was in LATIN1 encoding, but evidently it was
  actually in LATIN2. This resulted in ő (o-double-acute, U+0151, which is
  code 0xF5 in LATIN2) being misconverted into õ (o-tilde, U+00F5), as
  complained of in bug #10589 from Zoltán Sörös. We'd have messed up
  u-double-acute too, but there aren't any of those in the file. Other
  characters used in the file have the same codes in LATIN1 and LATIN2,
  which no doubt helped hide the problem for so long.

  The error is not only ours: the Snowball project also was confused about
  which encoding is required for Hungarian. But dealing with that will
  require source-code changes that I'm not at all sure we'll wish to
  back-patch. Fixing the stopword file seems reasonably safe to back-patch
  however.

* Fix breakages of hot standby regression test.  (Fujii Masao, 2014-06-06)

  This commit changes the HS regression test so that it uses a REPEATABLE
  READ transaction instead of a SERIALIZABLE one, because the SERIALIZABLE
  transaction isolation level is not available in hot standby. It also
  fixes a VACUUM/ANALYZE label mixup.

  This was fixed in HEAD (commit 2985e16), but it should have been
  back-patched to 9.1, which introduced SSI and forbade SERIALIZABLE
  transactions in hot standby.

  Amit Langote

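  The substitution, as it appears to a session on the standby:

      BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;     -- rejected in hot standby
      BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;  -- what the test now uses
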
* Add defenses against running with a wrong selection of LOBLKSIZE.  (Tom Lane, 2014-06-05)

  It's critical that the backend's idea of LOBLKSIZE match the way data has
  actually been divided up in pg_largeobject. While we don't provide any
  direct way to adjust that value, doing so is a one-line source code
  change and various people have expressed interest recently in changing
  it. So, just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the
  value in pg_control and cross-check that the backend's compiled-in
  setting matches the on-disk data.

  Also tweak the code in inv_api.c so that fetches from pg_largeobject
  explicitly verify that the length of the data field is not more than
  LOBLKSIZE. Formerly we just had Assert()s for that, which is no
  protection at all in production builds. In some of the call sites an
  overlength data value would translate directly to a security-relevant
  stack clobber, so it seems worth one extra runtime comparison to be sure.

  In the back branches, we can't change the contents of pg_control; but we
  can still make the extra checks in inv_api.c, which will offer some
  amount of protection against running with the wrong value of LOBLKSIZE.

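  Why an Assert alone is not enough, as a standalone sketch (the constant's
  value and the function are invented; NDEBUG builds compile assert()
  away):

      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define LOBLKSIZE 2048           /* illustrative, not the real setting */

      static void
      read_chunk(const unsigned char *data, size_t datalen)
      {
          assert(datalen <= LOBLKSIZE);   /* vanishes in production builds */
          if (datalen > LOBLKSIZE)        /* the check that always runs */
          {
              fprintf(stderr, "chunk length %zu exceeds LOBLKSIZE\n", datalen);
              exit(EXIT_FAILURE);
          }
          /* ... now safe to copy at most LOBLKSIZE bytes ... */
      }
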
* Fix longstanding bug in HeapTupleSatisfiesVacuum().  (Andres Freund, 2014-06-04)

  HeapTupleSatisfiesVacuum() didn't properly discern between
  DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been
  inserted in the current transaction and deleted in an aborted
  subtransaction of the current backend. At the very least that caused
  problems for CLUSTER and CREATE INDEX in transactions that had aborting
  subtransactions producing rows, leading to warnings like:

      WARNING: concurrent delete in progress within table "..."

  possibly in an endless, uninterruptible loop.

  Instead of treating *InProgress xmins the same as *IsCurrent ones, treat
  them as being distinct, like the other visibility routines. As
  implemented this separation can cause a behaviour change for rows that
  have been inserted and deleted in another, still running, transaction.
  HTSV will now return INSERT_IN_PROGRESS instead of DELETE_IN_PROGRESS for
  those. That is both more in line with the other visibility routines and
  arguably more correct, the latter because an INSERT_IN_PROGRESS will make
  callers look at/wait for xmin instead of xmax. The only current caller
  where that's possibly worse than the old behaviour is heap_prune_chain(),
  which now won't mark the page as prunable if a row has concurrently been
  inserted and deleted. That's harmless enough.

  As a cautionary measure, also insert an interrupt check before the gotos
  in IndexBuildHeapScan() that lead to the uninterruptible loop. There are
  other possible causes of repeated loops, like a row that several sessions
  try to update and all fail, and the cost of the check in the retry case
  is low.

  As this bug goes back all the way to the introduction of subtransactions
  in 573a71a5da, backpatch to all supported releases.

  Reported-By: Sandro Santilli

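  A sketch of the pattern that provoked the warning (table name invented):

      BEGIN;
      CREATE TABLE t (x int);
      SAVEPOINT s;
      INSERT INTO t VALUES (1);  -- row created in a subtransaction...
      ROLLBACK TO s;             -- ...which then aborts
      CREATE INDEX ON t (x);     -- the index build's scan formerly saw
      COMMIT;                    -- DELETE_IN_PROGRESS and could warn/loop
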
* Make plpython_unicode regression test work in more database encodings.  (Tom Lane, 2014-06-03)

  This test previously used a data value containing U+0080, and would
  therefore fail if the database encoding didn't have an equivalent to
  that; which only about half of our supported server encodings do. We
  could fall back to using some plain-ASCII character, but that seems like
  it's losing most of the point of the test. Instead switch to using U+00A0
  (no-break space), which translates into all our supported encodings
  except the four in the EUC_xx family.

  Per buildfarm testing. Back-patch to 9.1, which is as far back as this
  test is expected to succeed everywhere. (9.0 has the test, but without
  back-patching some 9.1 code changes we could not expect to get consistent
  results across platforms anyway.)

* Set the process latch when processing recovery conflict interrupts.  (Andres Freund, 2014-06-03)

  Because RecoveryConflictInterrupt() didn't set the process latch,
  anything using the latter to wait for events didn't get notified about
  recovery conflicts. Most latch users are never the target of recovery
  conflicts, which explains the lack of reports about this until now.
  Since 9.3, though, two possibly affected users exist: the SQL-callable
  pg_sleep() now uses latches to wait, and background workers are expected
  to use latches in their main loop. Both would currently wait until the
  end of WaitLatch's timeout.

  Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also be
  possible to fix the issue by having each latch user set
  set_latch_on_sigusr1. That seems failure prone though, as most of these
  callsites won't often receive recovery conflicts and thus will likely
  only be tested against normal query cancels et al. It'd also be
  unnecessarily verbose.

  Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be
  sufficient, because that's where pg_sleep() was converted to waiting on
  the latch and background workers got introduced; but there could be user
  level code making use of the latch pre 9.3.

* On OS X, link libpython normally, ignoring the "framework" framework.  (Tom Lane, 2014-05-30)

  As of Xcode 5.0, Apple isn't including the Python framework as part of
  the SDK-level files, which means that linking to it might fail depending
  on whether Xcode thinks you've selected a specific SDK version. According
  to their Tech Note 2328, they've basically deprecated the framework
  method of linking to libpython and are telling people to link to the
  shared library normally. (I'm pretty sure this is in direct contradiction
  to the advice they were giving a few years ago, but whatever.) Testing
  says that this approach works fine at least as far back as OS X 10.4.11,
  so let's just rip out the framework special case entirely. We do still
  need a special case to decide that OS X provides a shared library at all,
  unfortunately (I wonder why the distutils check doesn't work ...). But
  this is still less of a special case than before, so it's fine.

  Back-patch to all supported branches, since we'll doubtless be hearing
  about this more as more people update to recent Xcode.

* Revert "Fix bogus %name-prefix option syntax in all our Bison files."Tom Lane2014-05-28
| | | | | | | | | | | | This reverts commit 4c5fde4e288983f30dae09a7eea8e6a9e6145477. It turns out that the %name-prefix syntax without "=" does not work at all in pre-2.4 Bison. We are not prepared to make such a large jump in minimum required Bison version just to suppress a warning message in a version hardly any developers are using yet. When 3.0 gets more popular, we'll figure out a way to deal with this. In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for anyone using 3.0 who doesn't want to see the warning.
* Fix bogus %name-prefix option syntax in all our Bison files.  (Tom Lane, 2014-05-28)

  %name-prefix doesn't use an "=" sign according to the Bison docs, but it
  silently accepted one anyway, until Bison 3.0. This was originally a typo
  of mine in commit 012abebab1bc72043f3f670bf32e91ae4ee04bd2, and we seem
  to have slavishly copied the error into all the other grammar files.

  Per report from Vik Fearing; analysis by Peter Eisentraut. Back-patch to
  all active branches, since somebody might try to build a back branch with
  up-to-date tools.

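  The two spellings at issue, as they would appear in a grammar's
  declarations section (the prefix name is invented):

      %name-prefix="base_yy"    /* typo form, silently accepted before Bison 3.0 */
      %name-prefix "base_yy"    /* documented form, but broken in pre-2.4 Bison */
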
* Ensure cleanup in case of early errors in streaming base backups  (Magnus Hagander, 2014-05-28)

  Move the code that sends the initial status information, as well as the
  calculation of paths, inside the ENSURE_ERROR_CLEANUP block. If this code
  failed, we would "leak" a counter of the number of concurrent backups,
  thereby making the system always believe it was in backup mode. This
  could happen if the sending failed (which it probably never did, given
  that the small amount of data to send would never cause a flush). It is
  very low risk, but all operations after do_pg_start_backup should be
  protected.

* Use 0-based numbering in comments about backup blocks.  (Heikki Linnakangas, 2014-05-19)

  The macros and functions that work with backup blocks in the redo
  function use 0-based numbering, so let's use that consistently in the
  function that generates the records too. Makes it so much easier to
  compare the generation and replay functions.

  Backpatch to 9.0, where we switched from 1-based to 0-based numbering.

* Initialize tsId and dbId fields in WAL record of COMMIT PREPARED.  (Heikki Linnakangas, 2014-05-16)

  Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but
  missed that prepared transaction commits reuse that struct. Fix that.

  Because those fields were left uninitialized, replaying a commit prepared
  WAL record in a hot standby node would fail to remove the relcache init
  file. That can lead to "could not open file" errors on the standby. The
  relcache init file only needs to be removed when a system table/index is
  rewritten in the transaction using two phase commit, so that should be
  rare in practice.

  In HEAD, the incorrect dbId/tsId values are also used for filtering in
  logical replication code, causing the transaction to always be filtered
  out.

  Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was
  introduced.

* Fix unportable setvbuf() usage in initdb.  (Tom Lane, 2014-05-15)

  In yesterday's commit 2dc4f011fd61501cce507be78c39a2677690d44b, I tried
  to force buffering of stdout/stderr in initdb to be what it is by default
  when the program is run interactively on Unix (since that's how most
  manual testing is done). This tripped over the fact that Windows doesn't
  support _IOLBF mode. We dealt with that a long time ago in syslogger.c by
  falling back to unbuffered mode on Windows. Export that solution in
  port.h and use it in initdb.

  Back-patch to 8.4, like the previous commit.

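  The portability dodge as a standalone sketch; PG_IOLBF mirrors the idea
  of the macro exported in port.h, but treat the name as illustrative:

      #include <stdio.h>

      #ifdef WIN32
      #define PG_IOLBF _IONBF         /* Windows has no line-buffered mode */
      #else
      #define PG_IOLBF _IOLBF
      #endif

      int
      main(void)
      {
          setvbuf(stdout, NULL, PG_IOLBF, 0);  /* line-buffered if supported */
          setvbuf(stderr, NULL, _IONBF, 0);    /* stderr always unbuffered */
          printf("buffering configured\n");
          return 0;
      }
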
* Handle duplicate XIDs in txid_snapshot.  (Heikki Linnakangas, 2014-05-15)

  The proc array can contain duplicate XIDs when a transaction is just
  being prepared for two-phase commit. To cope, remove any duplicates in
  txid_current_snapshot(). Also ignore duplicates in the input functions,
  so that if e.g. you have an old pg_dump file that already contains
  duplicates, it will be accepted.

  Report and fix by Jan Wieck. Backpatch to all supported versions.

* Fix race condition in preparing a transaction for two-phase commit.  (Heikki Linnakangas, 2014-05-15)

  To lock a prepared transaction's shared memory entry, we used to mark it
  with the XID of the backend. When the XID was no longer active according
  to the proc array, the entry was implicitly considered as not locked
  anymore. However, when preparing a transaction, the backend's proc array
  entry was cleared before transferring the locks (and some other state) to
  the prepared transaction's dummy PGPROC entry, so there was a window
  where another backend could finish the transaction before it was in fact
  fully prepared.

  To fix, rewrite the locking mechanism of global transaction entries.
  Instead of an XID, just have a simple locked-or-not flag in each entry
  (we store the locking backend's backend id rather than a simple boolean,
  but that's just for debugging purposes). The backend is responsible for
  explicitly unlocking the entry, and to make sure that that happens,
  install a callback to unlock it on abort or process exit.

  Backpatch to all supported versions.

* In initdb, ensure stdout/stderr buffering behavior is what we expect.  (Tom Lane, 2014-05-14)

  Since this program may print to either stdout or stderr, the relative
  ordering of its messages depends on the buffering behavior of those
  files. Force stdout to be line-buffered and stderr to be unbuffered,
  ensuring that the behavior will match standard Unix interactive behavior,
  even when stdout and stderr are rerouted to a file.

  Per complaint from Tomas Vondra. The particular case he pointed out is
  new in HEAD, but issues of the same sort could arise in any branch with
  other error messages, so back-patch to all branches.

  I'm unsure whether we might not want to do this in other client programs
  as well. For the moment, just fix initdb.

* Ignore config.pl and buildenv.pl in src/tools/msvc.  (Tom Lane, 2014-05-12)

  config.pl and buildenv.pl can be used to customize build settings when
  using MSVC. They should never get committed into the common source tree.

  Back-patch to 9.0; it looks like the rules were different in 8.4.

  Michael Paquier

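  The resulting ignore rules (illustrative form):

      # src/tools/msvc/.gitignore
      /config.pl
      /buildenv.pl
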
* Un-break ecpg test suite under --disable-integer-datetimes.  (Noah Misch, 2014-05-08)

  Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b broke it. The change in
  sub-second precision at extreme dates is normal. The inconsistent
  truncation vs. rounding is essentially a bug, albeit a longstanding one.
  Back-patch to 8.4, like the causative commit.

* Protect against torn pages when deleting GIN list pages.  (Heikki Linnakangas, 2014-05-08)

  To-be-deleted list pages contain no useful information, as they are being
  deleted, but we must still protect the writes from being torn by a crash
  after a partial write. To do that, re-initialize the pages on WAL replay.

  Jeff Janes caught this with a test program to test partial writes.
  Backpatch to all supported versions.

* Include files copied from libpgport in .gitignore  (Heikki Linnakangas, 2014-05-08)

  Michael Paquier

* Avoid buffer bloat in libpq when server is consistently faster than client.  (Tom Lane, 2014-05-07)

  If the server sends a long stream of data, and the server + network are
  consistently fast enough to force the recv() loop in pqReadData() to
  iterate until libpq's input buffer is full, then upon processing the last
  incomplete message in each bufferload we'd usually double the buffer
  size, due to supposing that we didn't have enough room in the buffer to
  finish collecting that message. After filling the newly-enlarged buffer,
  the cycle repeats, eventually resulting in an out-of-memory situation
  (which would be reported misleadingly as "lost synchronization with
  server"). Of course, we should not enlarge the buffer unless we still
  need room after discarding already-processed messages.

  This bug dates back quite a long time: pqParseInput3 has had the behavior
  since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad
  in 2008. Probably the reason it's not been isolated before is that in
  common environments the recv() loop would always be faster than the
  server (if on the same machine) or faster than the network (if not); or
  at least it wouldn't be slower consistently enough to let the buffer ramp
  up to a problematic size. The reported cases involve Windows, which
  perhaps has different timing behavior than other platforms.

  Per bug #7914 from Shin-ichi Morita, though this is different from his
  proposed solution. Back-patch to all supported branches.

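  A hedged sketch of the corrected growth policy (the struct and names are
  invented, not libpq's own): compact away already-processed bytes first,
  and enlarge only if the unread tail still does not fit.

      #include <stdlib.h>
      #include <string.h>

      typedef struct
      {
          char   *data;
          size_t  size;       /* allocated bytes */
          size_t  len;        /* bytes currently held */
          size_t  pos;        /* bytes already processed */
      } InBuf;

      static int
      ensure_space(InBuf *b, size_t needed)
      {
          if (b->pos > 0)                    /* discard processed messages */
          {
              memmove(b->data, b->data + b->pos, b->len - b->pos);
              b->len -= b->pos;
              b->pos = 0;
          }
          if (b->len + needed > b->size)     /* enlarge only if still short */
          {
              char *tmp = realloc(b->data, b->len + needed);

              if (tmp == NULL)
                  return -1;
              b->data = tmp;
              b->size = b->len + needed;
          }
          return 0;
      }
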
* Fix failure to set ActiveSnapshot while rewinding a cursor.  (Tom Lane, 2014-05-07)

  ActiveSnapshot needs to be set when we call ExecutorRewind because some
  plan node types may execute user-defined functions during their ReScan
  calls (nodeLimit.c does so, at least). The wisdom of that is somewhat
  debatable, perhaps, but for now the simplest fix is to make sure the
  required context is valid. Failure to do this typically led to a
  null-pointer-dereference core dump, though it's possible that in more
  complex cases a function could be executed with the wrong snapshot,
  leading to very subtle misbehavior.

  Per report from Leif Jensen. It's been broken for a long time, so
  back-patch to all active branches.

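  The sort of usage affected, sketched with invented names; nodeLimit
  re-evaluates its LIMIT expression during the rewind's ReScan:

      BEGIN;
      DECLARE c SCROLL CURSOR FOR SELECT x FROM t LIMIT my_limit_fn();
      FETCH ALL FROM c;
      MOVE BACKWARD ALL IN c;   -- rewinds; formerly could crash here
      FETCH ALL FROM c;
      COMMIT;
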
* Fix interval test, which was broken for floating-point timestamps.  (Jeff Davis, 2014-05-06)

  Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b introduced a test that
  couldn't be made consistent between integer and floating-point
  timestamps. It was designed to test the longest possible interval output
  length, so removing four zeros from the number of hours, as this patch
  does, is not ideal. But the test still has some utility for its original
  purpose, and there aren't a lot of other good options.

  Noah Misch suggested a different approach where we test that the output
  either matches what we expect from integer timestamps or what we expect
  from floating-point timestamps. That seemed to obscure an otherwise
  simple test, however.

  Reviewed by Tom Lane and Noah Misch.

* Remove tabs after spaces in C comments  (Bruce Momjian, 2014-05-06)

  This was not changed in HEAD, but will be done later as part of a
  pgindent run. Future pgindent runs will also do this.

  Report by Tom Lane

  Backpatch through all supported branches, but not HEAD

* Fix use of free in walsender error handling after a sysid mismatch.  (Heikki Linnakangas, 2014-05-06)

  Found via valgrind. The bug exists since the introduction of the
  walsender, so backpatch to 9.0.

  Andres Freund

* Fix handling of array of char pointers in ecpglib.  (Michael Meskes, 2014-05-06)

  When an array of char * was used as the target for a FETCH statement
  returning more than one row, ecpglib tried to store the entire result in
  the first element. Instead it should fill the array of char pointers at
  the right offsets, use the address rather than the value of the C
  variable while reading the array, and treat such a variable as char **
  instead of char * for pointer arithmetic.

  Patch by Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>

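  The usage pattern being fixed, as an ECPG fragment (cursor, table, and
  column names invented):

      EXEC SQL BEGIN DECLARE SECTION;
      char *names[8];    /* array of char pointers as the fetch target */
      EXEC SQL END DECLARE SECTION;

      EXEC SQL DECLARE cur CURSOR FOR SELECT name FROM people;
      EXEC SQL OPEN cur;
      EXEC SQL FETCH 8 FROM cur INTO :names;  /* one row per array element */
      EXEC SQL CLOSE cur;
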
* Fix possible cache invalidation failure in ReceiveSharedInvalidMessages.  (Tom Lane, 2014-05-05)

  Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to
  reduce the number of calls into sinvaladt.c (which require taking a
  shared lock) by keeping a local buffer of
  collected-but-not-yet-processed messages. However, if processing of the
  last message in a batch resulted in a recursive call to
  ReceiveSharedInvalidMessages, we could overwrite that message with a new
  one while the outer invalidation function was still working on it. This
  would be likely to lead to invalidation of the wrong cache entry,
  allowing subsequent processing to use stale cache data. The fix is just
  to make a local copy of each message while we're processing it.

  Spotted by Andres Freund. Back-patch to 8.4 where the bug was introduced.

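  The shape of the fix (invented types and names): process a by-value
  copy, so that a recursive refill of the buffer cannot clobber the
  message while it is being handled.

      typedef struct { int id; int key; } Msg;   /* stand-in message type */

      static Msg  msgbuf[32];
      static int  nextmsg, nmsgs;

      static void
      process_messages(void (*handle) (const Msg *))
      {
          while (nextmsg < nmsgs)
          {
              Msg  msgcopy = msgbuf[nextmsg++];  /* local copy, taken first */

              handle(&msgcopy);   /* may recurse and overwrite msgbuf[] */
          }
      }
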
* Fix failure to detoast fields in composite elements of structured types.  (Tom Lane, 2014-05-01)

  If we have an array of records stored on disk, the individual record
  fields cannot contain out-of-line TOAST pointers: the tuptoaster.c
  mechanisms are only prepared to deal with TOAST pointers appearing in
  top-level fields of a stored row. The same applies for ranges over
  composite types, nested composites, etc. However, the existing code only
  took care of expanding sub-field TOAST pointers for the case of nested
  composites, not for other structured types containing composites. For
  example, given a command such as

      UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...

  where x is a direct reference to a field of an on-disk tuple, if that
  field is long enough to be toasted out-of-line then the TOAST pointer
  would be inserted as-is into the array column. If the source record for
  x is later deleted, the array field value would become a dangling
  pointer, leading to errors along the line of "missing chunk number 0 for
  toast value ..." when the value is referenced. A reproducible test case
  for this was provided by Jan Pecek, but it seems likely that some of the
  "missing chunk number" reports we've heard in the past were caused by
  similar issues.

  Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to
  produce a self-contained Datum value if the Datum is of composite type.
  Seen in this light, the problem is not just confined to arrays and
  ranges, but could also affect some other places where detoasting is done
  in that way, for example form_index_tuple().

  I tried teaching the array code to apply toast_flatten_tuple_attribute()
  along with PG_DETOAST_DATUM() when the array element type is composite,
  but this was messy and imposed extra cache lookup costs whether or not
  any TOAST pointers were present, indeed sometimes when the array element
  type isn't even composite (since sometimes it takes a typcache lookup to
  find that out). The idea of extending that approach to all the places
  that currently use PG_DETOAST_DATUM() wasn't attractive at all.

  This patch instead solves the problem by decreeing that composite Datum
  values must not contain any out-of-line TOAST pointers in the first
  place; that is, we expand out-of-line fields at the point of
  constructing a composite Datum, not at the point where we're about to
  insert it into a larger tuple. This rule is applied only to true
  composite Datums, not to tuples that are being passed around the system
  as tuples, so it's not as invasive as it might sound at first. With this
  approach, the amount of code that has to be touched for a full solution
  is greatly reduced, and added cache lookup costs are avoided except when
  there actually is a TOAST pointer that needs to be inlined.

  The main drawback of this approach is that we might sometimes dereference
  a TOAST pointer that will never actually be used by the query, imposing a
  rather large cost that wasn't there before. On the other side of the
  coin, if the field value is used multiple times then we'll come out ahead
  by avoiding repeat detoastings. Experimentation suggests that common SQL
  coding patterns are unaffected either way, though. Applications that are
  very negatively affected could be advised to modify their code to not
  fetch columns they won't be using.

  In future, we might consider reverting this solution in favor of
  detoasting only at the point where data is about to be stored to disk,
  using some method that can drill down into multiple levels of nested
  structured types. That will require defining new APIs for structured
  types, though, so it doesn't seem feasible as a back-patchable fix.

  Note that this patch changes HeapTupleGetDatum() from a macro to a
  function call; this means that any third-party code using that macro
  will not get protection against creating TOAST-pointer-containing Datums
  until it's recompiled. The same applies to any uses of
  PG_RETURN_HEAPTUPLEHEADER(). It seems likely that this is not a big
  problem in practice: most of the tuple-returning functions in core and
  contrib produce outputs that could not possibly be toasted anyway, and
  the same probably holds for third-party extensions.

  This bug has existed since TOAST was invented, so back-patch to all
  supported branches.

* Check for interrupts and stack overflow during rule/view dumps.  (Tom Lane, 2014-04-30)

  Since ruleutils.c recurses, it could be driven to stack overflow by
  deeply nested constructs. Very large queries might also take long enough
  to deparse that a check for interrupts seems like a good idea. Stick
  appropriate tests into a couple of key places.

  Noted by Greg Stark. Back-patch to all supported branches.

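  The guards in question are the backend's standard idioms from
  miscadmin.h, sketched here at the top of a recursive routine (the
  function itself is invented):

      #include "postgres.h"
      #include "miscadmin.h"      /* CHECK_FOR_INTERRUPTS, check_stack_depth */

      static void
      deparse_recurse(void *node)
      {
          check_stack_depth();    /* error out before overflowing the stack */
          CHECK_FOR_INTERRUPTS(); /* honor query cancel during long deparse */

          /* ... recurse into child nodes ... */
      }
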
* Add missing SYSTEMQUOTEs  (Heikki Linnakangas, 2014-04-30)

  Some popen() calls were missing SYSTEMQUOTEs, which caused initdb and
  pg_upgrade to fail on Windows, if the installation path contained both
  spaces and @ signs.

  Patch by Nikhil Deshpande. Backpatch to all supported versions.

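  How the macro is used when building a command string for popen() or
  system(); on Windows it supplies an outer pair of quotes, elsewhere it
  expands to nothing (paths and variables here are invented):

      char    cmd[1024];

      snprintf(cmd, sizeof(cmd),
               SYSTEMQUOTE "\"%s/pg_ctl\" -D \"%s\" start" SYSTEMQUOTE,
               bindir, datadir);
      fp = popen(cmd, "r");
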
* Fix two bugs in WAL-logging of GIN pending-list pages.  (Heikki Linnakangas, 2014-04-28)

  In writeListPage, never take a full-page image of the page, because we
  have all the information required to re-initialize in the WAL record
  anyway. Before this fix, a full-page image was always generated, unless
  full_page_writes=off, because when the page is initialized its LSN is
  always 0. In stable branches, keep the code to restore the backup blocks
  if they exist, in case the WAL was generated with an older minor version,
  but in master Assert that there are no full-page images.

  In the redo routine, add missing "off++". Otherwise the tuples are added
  to the page in reverse order. That happens to be harmless because we
  always scan and remove all the tuples together, but it was clearly
  wrong. Also, it was masked by the first bug unless full_page_writes=off,
  because the page was always restored from a full-page image.

  Backpatch to all supported versions.

* Reset pg_stat_activity.xact_start during PREPARE TRANSACTION.  (Tom Lane, 2014-04-24)

  Once we've completed a PREPARE, our session is not running a transaction,
  so its entry in pg_stat_activity should show xact_start as null, rather
  than leaving the value as the start time of the now-prepared transaction.

  I think possibly this oversight was triggered by faulty extrapolation
  from the adjacent comment that says PrepareTransaction should not call
  AtEOXact_PgStat, so tweak the wording of that comment.

  Noted by Andres Freund while considering bug #10123 from Maxim Boguk,
  although this error doesn't seem to explain that report.

  Back-patch to all active branches.

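  The observable effect, sketched (assumes max_prepared_transactions > 0
  and an invented table t; the pid column is spelled procpid before 9.2):

      BEGIN;
      INSERT INTO t VALUES (1);
      PREPARE TRANSACTION 'demo';
      SELECT xact_start FROM pg_stat_activity
       WHERE procpid = pg_backend_pid();
      -- now null, instead of the prepared transaction's start time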