path: root/src/backend
Commit message (Author, Date)
* Prevent improper reordering of antijoins vs. outer joins. (Tom Lane, 2015-04-25)

  An outer join appearing within the RHS of an antijoin can't commute with
  the antijoin, but somehow I missed teaching make_outerjoininfo() about
  that.

  In Teodor Sigaev's recent trouble report, this manifests as a "could not
  find RelOptInfo for given relids" error within eqjoinsel(); but I think
  silently wrong query results are possible too, if the planner misorders
  the joins and doesn't happen to trigger any internal consistency checks.

  It's broken as far back as we had antijoins, so back-patch to all
  supported branches.

* Perform RLS WITH CHECK before constraints, etc. (Stephen Frost, 2015-04-24)

  The RLS capability is built on top of the WITH CHECK OPTION system which
  was added for auto-updatable views; however, unlike WCOs on views (which
  are mandated by the SQL spec to not fire until after all other
  constraints and checks are done), it makes much more sense for RLS
  checks to happen earlier than constraint and uniqueness checks.

  This patch reworks the structure which holds the WCOs a bit to be
  explicitly either VIEW or RLS checks, and the RLS-related checks are
  done prior to the constraint and uniqueness checks. This also allows
  better error reporting, as we now report when a violation is due to a
  WITH CHECK OPTION and when it's due to an RLS policy violation, which
  was independently noted by Craig Ringer as being confusing.

  The documentation is also updated to include a paragraph about when RLS
  WITH CHECK handling is performed, as there have been a number of
  questions regarding that and the documentation was previously silent on
  the matter.

  Author: Dean Rasheed, with some kibitzing and comment changes by me.

* Fix obsolete comment in set_rel_size(). (Tom Lane, 2015-04-24)

  The cross-reference to set_append_rel_pathlist() was obsoleted by commit
  e2fa76d80ba571d4de8992de6386536867250474, which split what had been
  set_rel_pathlist() and child routines into two sets of functions. But I
  (tgl) evidently missed updating this comment. Back-patch to 9.2 to avoid
  unnecessary divergence among branches.

  Amit Langote

* Add comments explaining how unique and exclusion constraints are enforced. (Heikki Linnakangas, 2015-04-24)

* Copy the relation name for error reporting in WCOs. (Stephen Frost, 2015-04-24)

  In get_row_security_policies(), we need to make a copy of the relation
  name when building the WithCheckOptions structure, since
  RelationGetRelationName just returns a pointer into the local Relation
  structure. The relation name in the WCO structure is only used for error
  reporting.

  Pointed out by Robert and Christian Ullrich, who noted that the
  buildfarm members with -DCLOBBER_CACHE_ALWAYS were failing.

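  For illustration, the general shape of such a fix (a minimal sketch, not
  the committed diff; the helper name is invented): pstrdup() the name into
  the current memory context before stashing it anywhere long-lived.

      #include "postgres.h"
      #include "utils/rel.h"

      /*
       * Sketch: RelationGetRelationName() points into the relcache entry,
       * which a cache flush can rebuild, so copy the string before keeping
       * a long-lived reference to it.
       */
      static char *
      copy_relation_name(Relation rel)
      {
          return pstrdup(RelationGetRelationName(rel));
      }
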
* Move functions related to index maintenance to separate source file. (Heikki Linnakangas, 2015-04-24)

  There is enough code here for these functions to deserve a file of their
  own, rather than being buried in the middle of execUtils.c.

* Fix deadlock at startup, if max_prepared_transactions is too small. (Heikki Linnakangas, 2015-04-23)

  When the startup process recovers transactions by scanning the
  pg_twophase directory, it should clear MyLockedGxact after it's done
  processing each transaction, like we do during normal operation at
  PREPARE TRANSACTION. Otherwise, if the startup process exits due to an
  error, it will try to clear the locking_backend field of the last
  recovered transaction. That's usually harmless, but if the error happens
  in MarkAsPreparing, while holding TwoPhaseStateLock, the shmem-exit hook
  will try to acquire TwoPhaseStateLock again, and deadlock with itself.

  This fixes bug #13128 reported by Grant McAlister. The bug was
  introduced by commit bb38fb0d, so backpatch to all supported versions
  like that commit.

* Use the right type OID after creating a shell type. (Alvaro Herrera, 2015-04-22)

  Commit a2e35b53c39b2a neglected to update the type OID used further down
  in DefineType when TypeShellMake was changed to return ObjectAddress
  instead of OID (it got it right in DefineRange, however). This resulted
  in an internal error message being issued when looking up I/O functions.

  Author: Michael Paquier

  Also add Asserts to a couple of other places to ensure that the type OID
  being used is as expected.

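  An illustrative fragment of the pattern the fix restores (variable names
  assumed, not the committed code): read the OID out of the returned
  ObjectAddress instead of relying on a stale variable.

      ObjectAddress address = TypeShellMake(typeName, typeNamespace, ownerId);
      Oid           typoid  = address.objectId;

      Assert(OidIsValid(typoid));  /* sanity check before I/O function lookup */
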
* RLS fixes, new hooks, and new test module. (Stephen Frost, 2015-04-22)

  In prepend_row_security_policies(), defaultDeny was always true, so if
  there were any hook policies, the RLS policies on the table would just
  get discarded. Fixed to start off with defaultDeny as false and then
  properly set it later if we detect that only the default-deny policy
  exists for the internal policies.

  The infinite recursion detection in fireRIRrules() didn't properly
  manage the activeRIRs list in the case of WCOs, so it would incorrectly
  report infinite recursion if the same relation with RLS appeared more
  than once in the rtable, for example "UPDATE t ... FROM t ...".

  Further, the RLS expansion code in fireRIRrules() was handling RLS in
  the main loop through the rtable, which led to RTEs being visited twice
  if they contained sublink subqueries, which
  prepend_row_security_policies() attempted to handle by exiting early if
  the RTE already had securityQuals. That doesn't work, however, since if
  the query involved a security barrier view on top of a table with RLS,
  the RTE would already have securityQuals (from the view) by the time
  fireRIRrules() was invoked, and so the table's RLS policies would be
  ignored. This is fixed in fireRIRrules() by handling RLS in a separate
  loop at the end, after dealing with any other sublink subqueries, thus
  ensuring that each RTE is only visited once for RLS expansion.

  The inheritance planner code didn't correctly handle non-target
  relations with RLS, which would get turned into subqueries during
  planning. Thus an update of the form "UPDATE t1 ... FROM t2 ..." where
  t1 has inheritance and t2 has RLS quals would fail. Fix by making sure
  to copy in and update the securityQuals when they exist for non-target
  relations.

  process_policies() was adding WCOs to non-target relations, which is
  unnecessary and could lead to a lot of wasted time in the rewriter and
  the planner. Fix by only adding WCO policies when working on the result
  relation. Also in process_policies(), we should be copying the USING
  policies to the WITH CHECK policies on a per-policy basis; fix by moving
  the copying up into the per-policy loop.

  Lastly, as noted by Dean, we were simply adding policies returned by the
  hook provided to the list of quals being AND'd, meaning that they would
  actually restrict records returned and there was no option to have
  internal policies and hook-based policies work together permissively (as
  all internal policies currently work). Instead, explicitly add support
  for both permissive and restrictive policies by having a hook for each
  and combining the results appropriately. To ensure this is all done
  correctly, add a new test module (test_rls_hooks) to test the various
  combinations of internal, permissive, and restrictive hook policies.

  Largely from Dean Rasheed (thanks!):
  CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com

  Author: Dean Rasheed, though I added the new hooks and test module.

* Pull in tableoid for inheritance with rowMarks. (Stephen Frost, 2015-04-22)

  As noted by Etsuro Fujita [1] and Dean Rasheed [2],
  cb1ca4d800621dcae67ca6c799006de99fa4f0a5 changed ExecBuildAuxRowMark()
  to always look for the tableoid in the target list, but didn't also
  change preprocess_targetlist() to always include the tableoid. This
  resulted in errors with soon-to-be-added RLS-with-inheritance tests, and
  errors when using inheritance with foreign tables.

  Authors: Etsuro Fujita and Dean Rasheed (independently)

  Minor word-smithing on the comments by me.

  [1] 552CF0B6.8010006@lab.ntt.co.jp
  [2] CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com

* Rename pg_replication_slots' new active_in to active_pid. (Andres Freund, 2015-04-22)

  In d811c037ce, active_in was added, but discussion since has shown that
  active_pid is preferred as a name.

  Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com

* Add 'active_in' column to pg_replication_slots. (Andres Freund, 2015-04-21)

  Right now it is visible whether a replication slot is active in any
  session, but not in which. Adding the active_in column, containing the
  pid of the backend having acquired the slot, makes it much easier to
  associate pg_replication_slots entries with the corresponding
  pg_stat_replication/pg_stat_activity row.

  This should have been done from the start, but I (Andres) dropped the
  ball there somehow.

  Author: Craig Ringer, revised by me
  Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com

* Honor OID status of CREATE LIKE'd tables. (Bruce Momjian, 2015-04-20)

  Previously, tables created by CREATE LIKE never had OIDs.

  Report by Tom Lane

* Fix typo in relcache's equalPolicy(). (Stephen Frost, 2015-04-17)

  The USING policies were not being checked for differences, as the same
  policy was being passed in to both sides of the equal(). This could
  result in backends not realizing that a policy had been changed, if none
  of the other attributes had been changed. Fix by passing the USING quals
  of policy1 and policy2 to equal() for comparison.

  No need to back-patch as this is not yet released. Noticed while testing
  changes to RLS proposed by Dean Rasheed.

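  The shape of the fix, sketched (field name assumed from the 9.5-era
  RowSecurityPolicy struct; not the exact committed hunk):

      /*
       * Compare the two policies' USING quals -- previously policy1's
       * qual was passed for both arguments, so the test always "matched".
       */
      if (!equal(policy1->qual, policy2->qual))
          return false;
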
* Fix assertion failure in logical decoding. (Heikki Linnakangas, 2015-04-16)

  Logical decoding set SnapshotData's regd_count field to prevent the
  snapshot manager from prematurely freeing snapshots that are generated
  by the decoding system. That was always an abuse of the field, as it was
  never supposed to be used outside the snapshot manager. Commit 94028691
  made the snapshot manager's tracking of snapshots smarter, and that
  scheme fell apart. The snapshot manager got confused and hit the
  assertion when a snapshot that was marked with regd_count==1 was not
  found in the heap where the snapshot manager tracks the registered
  snapshots.

  To fix, don't abuse the regd_count field like that. Logical decoding
  still abuses the active_count field for similar purposes, but that's
  currently harmless.

  The assertion failure was first reported by Michael Paquier.

* Fix logic to skip checkpoint if no records have been inserted. (Heikki Linnakangas, 2015-04-15)

  After the WAL format changes, the calculation of the size of a
  checkpoint record became incorrect. Instead of trying to fix the math,
  check that the previous record, i.e. the xl_prev value that we'd write
  for the next record, matches the last checkpoint's redo pointer. That
  way it's not dependent on the size of the checkpoint record at all.

  The old logic was actually slightly wrong all along: if the previous
  checkpoint record crossed a page boundary, the page headers threw off
  the record size calculation, and the checkpoint was not skipped. The new
  checkpoint would not cross a page boundary, so this only resulted in at
  most one extra checkpoint after the system became idle. The new logic
  fixes that. (It's not worth fixing in back-branches.)

  However, it makes some sense to try to keep the latest checkpoint
  contained fully in a page, or at least in a single WAL segment, just on
  general robustness grounds. If something goes awfully wrong, it's more
  likely that you can recover the latest WAL segment than the last two WAL
  segments. So I added an extra check that the checkpoint is not skipped
  if the previous checkpoint crossed a WAL segment boundary.

  Reported by Jeff Janes.

* Integrate pg_upgrade_support module into backend. (Peter Eisentraut, 2015-04-14)

  Previously, these functions were created in a schema "binary_upgrade",
  which was deleted after pg_upgrade was finished. Because we don't want
  to keep that schema around permanently, move them to pg_catalog but
  rename them with a binary_upgrade_... prefix.

  The provided functions are only small wrappers around global variables
  that were added specifically for pg_upgrade use, so keeping the module
  separate does not create any modularity.

  The functions still check that they are only called in binary upgrade
  mode, so it is not possible to call these during normal operation.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Fix typo in comment. (Alvaro Herrera, 2015-04-14)

  SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT

  I introduced this ancient typo in subtrans.c and later propagated it to
  multixact.c. I fixed the latter in f741300c, but only back to 9.3;
  backpatch to all supported branches for consistency.

* Reorganize our CRC source files again. (Heikki Linnakangas, 2015-04-14)

  Now that we use CRC-32C in WAL and the control file, the "traditional"
  and "legacy" CRC-32 variants are not used in any frontend programs
  anymore. Move the code for those back from src/common to
  src/backend/utils/hash. Also move the slicing-by-8 implementation (back)
  to src/port.

  This is in preparation for the next patch, which will add another
  implementation that uses Intel SSE 4.2 instructions to calculate
  CRC-32C, where available.

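  For reference, CRC-32C differs from the traditional CRC-32 only in its
  (reflected) polynomial, 0x82F63B78. A standalone bit-at-a-time sketch --
  not the optimized slicing-by-8 code this commit moves -- where the
  standard check value for "123456789" is 0xe3069283:

      #include <stdint.h>
      #include <stdio.h>

      /* Bit-at-a-time CRC-32C (Castagnoli), reflected polynomial. */
      static uint32_t
      crc32c(const void *data, size_t len)
      {
          const uint8_t *p = data;
          uint32_t crc = 0xFFFFFFFF;

          while (len--)
          {
              crc ^= *p++;
              for (int i = 0; i < 8; i++)
                  crc = (crc >> 1) ^ (0x82F63B78 & (0 - (crc & 1)));
          }
          return crc ^ 0xFFFFFFFF;
      }

      int
      main(void)
      {
          printf("%08x\n", crc32c("123456789", 9));  /* prints e3069283 */
          return 0;
      }
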
* Remove duplicated word in README. (Alvaro Herrera, 2015-04-13)

* Don't archive bogus recycled or preallocated files after timeline switch. (Heikki Linnakangas, 2015-04-13)

  After a timeline switch, we would leave behind recycled WAL segments
  that are in the future, but on the old timeline. After promotion, and
  after they become old enough to be recycled again, we would notice that
  they don't have a .ready or .done file, create a .ready file for them,
  and archive them. That's bogus, because the files contain garbage,
  recycled from an older timeline (or preallocated as zeros). We shouldn't
  archive such files.

  This could happen when we're following a timeline switch during replay,
  or when we switch to a new timeline at end-of-recovery.

  To fix, whenever we switch to a new timeline, scan the data directory
  for WAL segments on the old timeline, but with a higher segment number,
  and remove them. Those don't belong to our timeline history, and are
  most likely bogus recycled or preallocated files. They could also be
  valid files that we streamed from the primary ahead of time, but in any
  case, they're not needed to recover to the new timeline.

* Add system view pg_stat_ssl. (Magnus Hagander, 2015-04-12)

  This view shows information about all connections, such as whether the
  connection is using SSL, which cipher is used, and which client
  certificate (if any) is used.

  Reviews by Alex Shulgin, Heikki Linnakangas, Andres Freund & Michael
  Paquier

* Remove duplicated words in comments. (Heikki Linnakangas, 2015-04-12)

  David Rowley

* Optimize locking a tuple already locked by another subxact. (Alvaro Herrera, 2015-04-10)

  Locking and updating the same tuple repeatedly led to some strange
  multixacts being created which had several subtransactions of the same
  parent transaction holding locks of the same strength. However, once a
  subxact of the current transaction holds a lock of a given strength,
  it's not necessary to acquire the same lock again. This made some coding
  patterns much slower than required.

  The fix is twofold. First we change HeapTupleSatisfiesUpdate to return
  HeapTupleBeingUpdated for the case where the current transaction is
  already a single-xid locker for the given tuple; it used to return
  HeapTupleMayBeUpdated for that case. The new logic is simpler, and the
  change to pgrowlocks is a testament to that: previously we needed to
  check for the single-xid locker separately in a very ugly way. That test
  is simpler now.

  As fallout from the HTSU change, some of its callers need to be amended
  so that tuple-locked-by-own-transaction is taken into account in the
  BeingUpdated case rather than the MayBeUpdated case. For many of them
  there is no difference; but heap_delete() and heap_update() now check
  explicitly and do not grab the tuple lock in that case.

  The HTSU change also means that routine MultiXactHasRunningRemoteMembers
  introduced in commit 11ac4c73cb895 is no longer necessary and can be
  removed; the case that used to require it is now handled naturally as a
  result of the changes to heap_delete and heap_update.

  The second part of the fix to the performance issue is to adjust
  heap_lock_tuple to avoid the slowness:

  1. Previously we checked for the case that our own transaction already
     held a strong enough lock and returned MayBeUpdated, but only in the
     multixact case. Now we do it for the plain Xid case as well, which
     saves having to LockTuple.

  2. If the current transaction is the only locker of the tuple (but with
     a lock not as strong as what we need; otherwise it would have been
     caught in the check mentioned above), we can skip sleeping on the
     multixact, and instead go straight to creating an updated multixact
     with the additional lock strength.

  3. Most importantly, make sure that both the single-xid-locker case and
     the multixact-locker case optimization are applied always. We do this
     by checking both in a single place, rather than them appearing in two
     separate portions of the routine -- something that is made possible
     by the HeapTupleSatisfiesUpdate API change. Previously we would only
     check for the single-xid case when HTSU returned MayBeUpdated, and
     only checked for the multixact case when HTSU returned BeingUpdated.
     This was at odds with what HTSU actually returned in one case: if our
     own transaction was a locker in a multixact, it returned
     MayBeUpdated, so the optimization never applied. This is what led to
     the large multixacts in the first place.

  Per bug report #8470 by Oskari Saarenmaa.

* Fix typo in eb68379c3. (Andres Freund, 2015-04-09)

  I'd accidentally failed to rename PG_FORCE_NULL to BKI_FORCE_NULL in one
  place.

  Author: Jeevan Chalke
  Discussion: CAM2+6=VPoow5PqgqiTjPX4QNeokb7op8aD_8Zg3QnHZMvvU0GQ@mail.gmail.com

* Remove obsolete FORCE option from REINDEX. (Fujii Masao, 2015-04-09)

  The FORCE option has been marked "obsolete" since the very old version
  7.4, but it existed for backwards compatibility. Per discussion on
  pgsql-hackers, we concluded that it's no longer worth continuing to
  support the option.

* Change SQLSTATE for event triggers "wrong context" message. (Alvaro Herrera, 2015-04-08)

  When certain event-trigger-only functions are called in the wrong
  context, they were reporting the "feature not supported" SQLSTATE, which
  is somewhat misleading. Create a new custom error code for such uses
  instead.

  Not back-patched since it may be seen as an undesirable behavioral
  change.

  Author: Michael Paquier
  Discussion: https://www.postgresql.org/message-id/CAB7nPqQ-5NAkHQHh_NOm7FPep37NCiLKwPoJ2Yxb8TDoGgbYYA@mail.gmail.com

* Fix autovacuum launcher shutdown sequence. (Alvaro Herrera, 2015-04-08)

  It was previously possible to have the launcher re-execute its main loop
  before shutting down if some other signal was received or an error
  occurred after getting SIGTERM, as reported by Qingqing Zhou.

  While investigating, Tom Lane further noticed that if autovacuum had
  been disabled in the config file, it would misbehave by trying to start
  a new worker instead of bailing out immediately -- it would consider
  itself as invoked in emergency mode.

  Fix both problems by checking the shutdown flag in a few more places.
  These problems have existed since autovacuum was introduced, so
  backpatch all the way back.

* Fix typo in comment. (Fujii Masao, 2015-04-08)

* Make trace_sort control abbreviation debug output for the text opclass. (Robert Haas, 2015-04-07)

  This is consistent with what the new numeric support for abbreviated
  keys now does, and seems much more convenient than having a separate
  compiler define to control this debug output.

  Peter Geoghegan

* Remove variable shadowing. (Alvaro Herrera, 2015-04-07)

  Commit a2e35b53 should have removed the variable declaration in the
  inner block, but didn't. As a result, the returned address might end up
  not being what was intended.

* pg_event_trigger_dropped_objects: add is_temp column. (Alvaro Herrera, 2015-04-06)

  It now also reports temporary objects dropped that are local to the
  backend. Previously we weren't reporting any temp objects because it was
  deemed unnecessary; but as it turns out, it is necessary if we want to
  keep close track of DDL command execution inside one session. Temp
  objects are reported as living in schema pg_temp, which works because
  such a schema-qualification always refers to the temp objects of the
  current session.

* Fix object identities for pg_conversion objects. (Alvaro Herrera, 2015-04-06)

  This was already fixed in 0d906798f, but I failed to update the
  array-formatted case. This is not backpatched, since this only affects
  the code path introduced by commit a676201490c.

* Reduce lock levels of some trigger DDL and add FKs. (Simon Riggs, 2015-04-05)

  Reduce lock levels to ShareRowExclusive for the following SQL:

    CREATE TRIGGER (but not DROP or ALTER)
    ALTER TABLE ENABLE TRIGGER
    ALTER TABLE DISABLE TRIGGER
    ALTER TABLE … ADD CONSTRAINT FOREIGN KEY

  Original work by Simon Riggs, extracted and refreshed by Andreas
  Karlsson. New test cases added by Andreas Karlsson.

  Reviewed by Noah Misch, Andres Freund, Michael Paquier and Simon Riggs

* Fix incorrect matching of subexpressions in outer-join plan nodes. (Tom Lane, 2015-04-04)

  Previously we would re-use input subexpressions in all expression trees
  attached to a Join plan node. However, if it's an outer join and the
  subexpression appears in the nullable-side input, this is potentially
  incorrect for apparently-matching subexpressions that came from above
  the outer join (ie, targetlist and qpqual expressions), because the
  executor will treat the subexpression value as NULL when maybe it should
  not be.

  The case is fairly hard to hit because (a) you need a non-strict
  subexpression (else NULL is correct), and (b) we don't usually compute
  expressions in the outputs of non-toplevel plan nodes. But we might do
  so if the expressions are sort keys for a mergejoin, for example.

  Probably in the long run we should make a more explicit distinction
  between Vars appearing above and below an outer join, but that will be a
  major planner redesign and not at all back-patchable. For the moment,
  just hack set_join_references so that it will not match any non-Var
  expressions coming from nullable inputs to expressions that came from
  above the join. (This is somewhat overkill, in that a strict expression
  could still be matched, but it doesn't seem worth the effort to check
  that.)

  Per report from Qingqing Zhou. The added regression test case is based
  on his example. This has been broken for a very long time, so back-patch
  to all active branches.

* Fix numeric abbreviation for --disable-float8-byval. (Robert Haas, 2015-04-03)

  When committing abd94bcac4582903765be7be959d1dbc121df0d0, I tried to
  make it decide what kind of abbreviation to use based only on
  SIZEOF_DATUM, without regard to USE_FLOAT8_BYVAL. That attempt was a few
  bricks short of a load, so try to fix it, and add a comment explaining
  what we're about.

  Patch by me; review (but not a full endorsement) by Andrew Gierth.

* Remove unnecessary variables in _hash_splitbucket(). (Tom Lane, 2015-04-03)

  Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to
  pass start_nblkno to _hash_splitbucket(), and for that matter
  unnecessary to have the internal nblkno variable either. My compiler
  didn't complain about that, but some did. I also rearranged the use of
  oblkno a bit to make that case more parallel.

  Report and initial patch by Petr Jelinek, rearranged a bit by me.
  Back-patch to all branches, like the previous patch.

* Transform ALTER TABLE/SET TYPE/USING expr during parse analysis. (Alvaro Herrera, 2015-04-03)

  This lets later stages have access to the transformed expression; in
  particular it allows DDL-deparsing code during event triggers to pass
  the transformed expression to ruleutils.c, so that the complete command
  can be deparsed.

  This shuffles the timing of the transform calls a bit: previously,
  nothing was transformed during parse analysis, and only the
  RELKIND_RELATION case was being handled during execution. After this
  patch, all expressions are transformed during parse analysis (including
  those for relkinds other than RELATION), and the error for other
  relation kinds is thrown only during execution. So we do more work than
  before to reject some bogus cases. That seems acceptable.

* Add log_min_autovacuum_duration per-table option. (Alvaro Herrera, 2015-04-03)

  This is useful to control autovacuum log volume, for situations where
  monitoring only a set of tables is necessary.

  Author: Michael Paquier
  Reviewed by: A team led by Naoya Anzai (also including Akira Kurosawa,
  Taiki Kondo, Huong Dangminh), Fujii Masao.

* Have autovacuum workers listen to SIGHUP, too. (Alvaro Herrera, 2015-04-03)

  They have historically ignored it, but it's been said to be useful at
  times to change their settings mid-flight.

  Author: Michael Paquier

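  For context, the conventional backend pattern for honoring SIGHUP in a
  worker's main loop looks roughly like this (the flag name follows the
  usual convention; a sketch, not the committed hunk):

      /* set from the SIGHUP signal handler */
      static volatile sig_atomic_t got_SIGHUP = false;

      /* ... in the main loop ... */
      if (got_SIGHUP)
      {
          got_SIGHUP = false;
          ProcessConfigFile(PGC_SIGHUP);  /* re-read postgresql.conf */
      }
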
* Fix error handling of XLogReaderAllocate in case of OOM. (Fujii Masao, 2015-04-03)

  Similarly to the previous fix 9b8d478, commit 2c03216 switched
  XLogReaderAllocate() to use a set of palloc calls instead of malloc,
  causing any caller of this function to fail with an error instead of
  receiving a NULL pointer in case of an out-of-memory error. Fix this by
  using palloc_extended with MCXT_ALLOC_NO_OOM, which will safely return
  NULL in case of an OOM.

  Michael Paquier, slightly modified by me.

* Change the way we decide whether to give up on abbreviated text keys. (Robert Haas, 2015-04-03)

  Be more aggressive about aborting early on if it looks like it's not
  helping, but be less aggressive about aborting later on, since it's more
  expensive at that point, and also since we're currently aborting in some
  cases where abbreviation can still deliver a substantial win.

  Peter Geoghegan. Extensive testing by Tomas Vondra.

* Rework handling of OOM when allocating record buffer in XLOG reader. (Fujii Masao, 2015-04-03)

  Commit 2c03216 changed allocate_recordbuf() so that it uses palloc to
  allocate the read buffer and fails immediately when an out-of-memory
  error shows up, even though its callers still expect that NULL is
  returned in that case. This bug is fixed by making allocate_recordbuf()
  use palloc_extended with the MCXT_ALLOC_NO_OOM flag and return NULL in
  the OOM case.

  Michael Paquier

* Add palloc_extended for frontend and backend. (Fujii Masao, 2015-04-03)

  This commit also adds pg_malloc_extended for the frontend. These
  interfaces can be used to control memory allocation at a lower level,
  using an interface similar to MemoryContextAllocExtended. For example,
  callers can specify MCXT_ALLOC_NO_OOM if they want to suppress the "out
  of memory" error while allocating the memory and handle a NULL return
  value.

  Michael Paquier, reviewed by me.

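  A minimal sketch of the new interface, assuming a caller that prefers a
  NULL return over an "out of memory" ERROR (function name invented):

      #include "postgres.h"
      #include "utils/memutils.h"

      /*
       * With MCXT_ALLOC_NO_OOM, palloc_extended() returns NULL on
       * allocation failure instead of raising ERROR, so the caller can
       * degrade gracefully, e.g. by retrying with a smaller request.
       */
      static char *
      try_allocate_buffer(Size size)
      {
          char *buf = palloc_extended(size, MCXT_ALLOC_NO_OOM);

          if (buf == NULL)
              elog(LOG, "could not allocate %lu bytes, will retry smaller",
                   (unsigned long) size);
          return buf;
      }
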
* Fix rare startup failure induced by MVCC-catalog-scans patch. (Tom Lane, 2015-04-03)

  While a new backend nominally participates in sinval signaling starting
  from the SharedInvalBackendInit call near the top of InitPostgres, it
  cannot recognize sinval messages for unshared catalogs of its database
  until it has set up MyDatabaseId. This is not problematic for the
  catcache or relcache, which by definition won't have loaded any data
  from or about such catalogs before that point. However, commit
  568d4138c646cd7c introduced a mechanism for re-using MVCC snapshots for
  catalog scans, and made invalidation of those depend on recognizing
  relevant sinval messages. So it's possible to establish a catalog
  snapshot to read pg_authid and pg_database, then before we set
  MyDatabaseId, receive sinval messages that should result in invalidating
  that snapshot --- but do not, because we don't realize they are for our
  database.

  This mechanism explains the intermittent buildfarm failures we've seen
  since commit 31eae6028eca4365. That commit was not itself at fault, but
  it introduced a new regression test that does reconnections concurrently
  with the "vacuum full pg_am" command in vacuum.sql. This allowed the
  pre-existing error to be exposed, given just the right timing, because
  we'd fail to update our information about how to access pg_am. In
  principle any VACUUM FULL on a system catalog could have created a
  similar hazard for concurrent incoming connections. Perhaps there are
  more subtle failure cases as well.

  To fix, force invalidation of the catalog snapshot as soon as we've set
  MyDatabaseId.

  Back-patch to 9.4 where the error was introduced.

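  The essence of the fix, heavily abbreviated (a sketch of the InitPostgres
  sequence, not the exact committed hunk; "dbid" is illustrative):

      MyDatabaseId = dbid;  /* sinval messages for our DB now recognizable */

      /*
       * Any catalog snapshot established before this point may already be
       * stale, because sinval messages for unshared catalogs of our
       * database were not recognized as ours.  Force a rebuild on next use.
       */
      InvalidateCatalogSnapshot();
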
* Repair stupid mistake in preprocessor directive. (Robert Haas, 2015-04-02)

* After a crash, don't restart workers with BGW_NEVER_RESTART. (Robert Haas, 2015-04-02)

  Amit Khandekar

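  For context, a hedged sketch of registering such a one-shot worker (all
  names here are illustrative, not from the commit); the fix makes the
  postmaster honor BGW_NEVER_RESTART even across a crash-and-restart
  cycle:

      #include "postgres.h"
      #include "postmaster/bgworker.h"

      static void
      register_one_shot_worker(void)
      {
          BackgroundWorker worker;

          memset(&worker, 0, sizeof(worker));
          worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
          worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
          worker.bgw_restart_time = BGW_NEVER_RESTART;  /* stay dead */
          snprintf(worker.bgw_name, BGW_MAXLEN, "one-shot worker");
          snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_extension");
          snprintf(worker.bgw_function_name, BGW_MAXLEN, "my_worker_main");
          RegisterBackgroundWorker(&worker);
      }
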
* Use abbreviated keys for faster sorting of numeric datums. (Robert Haas, 2015-04-02)

  Andrew Gierth, reviewed by Peter Geoghegan, with further tweaks by me.

* autovacuum: Fix polarity of "wraparound" variable. (Alvaro Herrera, 2015-04-02)

  Commit 0d831389749a3 inadvertently reversed the meaning of the
  wraparound variable. This causes vacuums which are not required for
  wraparound to wait for locks to be acquired, and, what is worse, it
  allows wraparound vacuums to skip locked pages.

  Bug reported by Jeff Janes in
  http://www.postgresql.org/message-id/CAMkU=1xmTEiaY=5oMHsSQo5vd9V1Ze4kNLL0qN2eH0P_GXOaYw@mail.gmail.com

  Analysis and patch by Kyotaro HORIGUCHI

* Add missing calls to DatumGetUInt32. (Robert Haas, 2015-04-02)

  These were inadvertently omitted from the commit that introduced
  abbreviated keys, commit 4ea51cdfe85ceef8afabceb03c446574daa0ac23.

  Peter Geoghegan

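  A sketch of why the conversions matter (the comparator below is
  illustrative, not the committed code): abbreviated keys travel as
  Datums, and a comparator must unpack them with the matching DatumGet*
  macro rather than comparing raw Datums.

      #include "postgres.h"
      #include "utils/sortsupport.h"

      /*
       * Illustrative abbreviated-key comparator: the Datums carry packed
       * uint32 keys, so fetch them with DatumGetUInt32() before comparing.
       */
      static int
      cmp_abbrev_uint32(Datum x, Datum y, SortSupport ssup)
      {
          uint32 a = DatumGetUInt32(x);
          uint32 b = DatumGetUInt32(y);

          if (a > b)
              return 1;
          if (a < b)
              return -1;
          return 0;
      }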