path: root/contrib/test_decoding
...
* Fix race condition leading to hanging logical slot creation. (Andres Freund, 2017-05-13)

  The snapshot assembly during the creation of logical slots relied on waiting
  for transactions in xl_running_xacts to end, by checking for their
  commit/abort records.  Unfortunately, despite locking, it is possible to see
  an xl_running_xacts record listing transactions as running that have already
  WAL-logged a commit/abort record, as the locking just prevents the ProcArray
  from being adjusted, and the commit record has to be logged first.

  That led to either delayed or hanging snapshot creation, because snapbuild.c
  would wait "forever" to see commit/abort records for some transactions.  That
  hang resolved only if an xl_running_xacts record without any running
  transactions happened to be logged, far from certain on a busy server.

  It's impractical to prevent that via more heavyweight locking; the likelihood
  of deadlocks and significantly increased contention would be too big.

  Instead change the initial snapshot creation to be solely based on tracking
  the oldest running transaction via xl_running_xacts->oldestRunningXid - that
  actually ends up significantly simplifying the code.  That has two
  disadvantages:
  1) Because we cannot fully "trust" the contents of xl_running_xacts, we
     cannot use it to build the initial snapshot.  Instead we have to wait
     twice for all running transactions to finish.
  2) Previously a slot, unless the race occurred, could be created as soon as
     all transactions perceived as running had logged commit/abort records;
     now we have to wait for the next xl_running_xacts record.
  To address that, trigger logging a new xl_running_xacts record from within
  snapbuild.c exactly when necessary.

  Unfortunately snapbuild.c's SnapBuild is stored on disk, one of the stupider
  ideas of a certain Mr Freund, so we can't change it in a minor release.  As
  this is going to be backpatched, we have to hack around a bit to keep on-disk
  compatibility.  A later commit will rejigger that on master.

  Author: Andres Freund, based on a quite different patch from Petr Jelinek
  Analyzed-By: Petr Jelinek
  Reviewed-By: Petr Jelinek
  Discussion: https://postgr.es/m/f37e975c-908f-858e-707f-058d3b1eb214@2ndquadrant.com
  Backpatch: 9.4-, where logical decoding has been introduced

* Rename WAL-related functions and views to use "lsn" not "location". (Tom Lane, 2017-05-11)

  Per discussion, "location" is a rather vague term that could refer to
  multiple concepts.  "LSN" is an unambiguous term for WAL locations and
  should be preferred.  Some function names, view column names, and function
  output argument names used "lsn" already, but others used "location", as
  well as yet other terms such as "wal_position".  Since we've already renamed
  a lot of things in this area from "xlog" to "wal" for v10, we may as well
  incur a bit more compatibility pain and make these names all consistent.

  David Rowley, minor additional docs hacking by me

  Discussion: https://postgr.es/m/CAKJS1f8O0njDKe8ePFQ-LK5-EjwThsDws6ohJ-+c6nWK+oUxtg@mail.gmail.com

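  For instance, the renamed spellings look roughly like this in v10 (a hedged
  illustration, not part of the commit message; earlier spellings varied
  between the 9.x "xlog" names and the intermediate "wal ... location" names):

      SELECT pg_current_wal_lsn();                          -- 9.x name: pg_current_xlog_location()
      SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0');  -- 9.x name: pg_xlog_location_diff()
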
* Force synchronous commit in new-ish test_decoding test. (Andres Freund, 2017-04-04)

  This was missed in a924c327 ff.

* Improve isolation tests infrastructure. (Andres Freund, 2017-03-14)

  Previously if a directory had both isolationtester and plain regression
  tests, they couldn't be run in parallel, because they'd access the same
  files/directories.  That, so far, only affected contrib/test_decoding.

  Rather than fix that locally in contrib/test_decoding, improve
  pg_regress_isolation_[install]check to use separate resources from plain
  regression tests.  That requires a minor change in pg_regress, namely that
  the --outputdir is created if it does not already exist, which seems like a
  good idea anyway.

  Use the improved helpers even where previously not used.

  Author: Tom Lane and Andres Freund
  Discussion: https://postgr.es/m/20170311194831.vm5ikpczq52c2drg@alap3.anarazel.de

* Remove all references to "xlog" from SQL-callable functions in pg_proc. (Robert Haas, 2017-02-09)

  Commit f82ec32ac30ae7e3ec7c84067192535b2ff8ec0e renamed the pg_xlog
  directory to pg_wal.  To make things consistent, and because "xlog" is
  terrible terminology for either "transaction log" or "write-ahead log",
  rename all SQL-callable functions that contain "xlog" in the name to
  instead contain "wal".  (Note that this may pose an upgrade hazard for some
  users.)

  Similarly, rename the xlog_position argument of the functions that create
  slots to be called wal_position.

  Discussion: https://www.postgresql.org/message-id/CA+Tgmob=YmA=H3DbW1YuOXnFVgBheRmyDkWcD9M8f=5bGWYEoQ@mail.gmail.com

* Update copyright via script for 2017 (Bruce Momjian, 2017-01-03)

* Various temporary slots test improvements (Peter Eisentraut, 2016-12-15)

  Fix the tests on slow machines (per buildfarm).  Add test for dropping on
  error.  And also try to consume real changes from temporary slots.

  From: Petr Jelinek <petr.jelinek@2ndquadrant.com>

* Fix race condition in test_decoding "slot" test. (Tom Lane, 2016-12-12)

  This test, just added in commit a924c327e, sometimes fails because the old
  backend hasn't finished dropping the temporary replication slot when the
  new backend looks.  Borrow the previously-invented methodology for waiting
  for the old process to disappear from pg_stat_activity.

  Petr Jelinek

  Discussion: https://postgr.es/m/62935e6f-4f1b-c433-e0fa-7f936a38b3e5@2ndquadrant.com

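  The waiting methodology amounts to polling pg_stat_activity.  A rough,
  hypothetical sketch (the PID value is illustrative, not taken from the
  test itself):

      DO $$
      DECLARE
        old_pid int := 12345;  -- hypothetical PID of the session being waited on
      BEGIN
        WHILE EXISTS (SELECT 1 FROM pg_stat_activity WHERE pid = old_pid) LOOP
          PERFORM pg_sleep(0.1);  -- retry until the old backend is gone
        END LOOP;
      END $$;
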
* Add support for temporary replication slots (Peter Eisentraut, 2016-12-12)

  This allows creating temporary replication slots that are removed
  automatically at the end of the session or on error.

  From: Petr Jelinek <petr.jelinek@2ndquadrant.com>

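  A minimal sketch of the SQL-level interface, assuming the optional third
  ("temporary") argument of the slot-creation function (slot name is
  illustrative only):

      -- the slot goes away automatically at session end or on error
      SELECT pg_create_logical_replication_slot('regression_slot_tmp', 'test_decoding', true);
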
* Make contrib/test_decoding regression tests safe for CZ locale. (Tom Lane, 2016-11-21)

  A little COLLATE "C" goes a long way.

  Pavel Stehule, per suggestion from Craig Ringer

  Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>

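  The kind of stabilization meant here looks roughly like this (a hedged
  sketch, not the exact test query):

      SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL)
      ORDER BY data COLLATE "C";  -- locale-independent ordering
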
* psql: Split up "Modifiers" column in \d and \dD (Peter Eisentraut, 2016-11-03)

  Make separate columns "Collation", "Nullable", "Default".

  Reviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>

* Fix further hash table order dependent tests. (Andres Freund, 2016-10-12)

  Similar to 0137caf273, this makes contrib and pl tests less dependent on
  hash-table order.  After this commit, at least some order-affecting changes
  to execGrouping.c don't result in regression test changes anymore.

* Correct logical decoding restore behaviour for subtransactions. (Andres Freund, 2016-10-03)

  Before initializing iteration over a subtransaction's changes, the last few
  changes were not spilled to disk.  That's correct if the transaction didn't
  spill to disk, but otherwise...

  This bug can lead to missed or misordered subtransaction contents when they
  were spilled to disk.

  Move spilling of the remaining in-memory changes to
  ReorderBufferIterTXNInit(), where it can easily be applied to the top
  transaction and, if present, subtransactions.

  Since this code had too many bugs already, noticeably increase test
  coverage.

  Fixes: #14319
  Reported-By: Huan Ruan
  Discussion: <20160909012610.20024.58169@wrigleys.postgresql.org>
  Backport: 9.4-, where logical decoding was added

* Add macros to make AllocSetContextCreate() calls simpler and safer. (Tom Lane, 2016-08-27)

  I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
  had typos in the context-sizing parameters.  While none of these led to
  especially significant problems, they did create minor inefficiencies, and
  it's now clear that expecting people to copy-and-paste those calls
  accurately is not a great idea.  Let's reduce the risk of future errors by
  introducing single macros that encapsulate the common use-cases.  Three
  such macros are enough to cover all but two special-purpose contexts; those
  two calls can be left as-is, I think.

  While this patch doesn't in itself improve matters for third-party
  extensions, it doesn't break anything for them either, and they can
  gradually adopt the simplified notation over time.

  In passing, change TopMemoryContext to use the default allocation
  parameters.  Formerly it could only be extended 8K at a time.  That was
  probably reasonable when this code was written; but nowadays we create many
  more contexts than we did then, so that it's not unusual to have a couple
  hundred K in TopMemoryContext, even without considering various dubious
  code that sticks other things there.  There seems no good reason not to let
  it use growing blocks like most other contexts.

  Back-patch to 9.6, mostly because that's still close enough to HEAD that
  it's easy to do so, and keeping the branches in sync can be expected to
  avoid some future back-patching pain.  The bugs fixed by these changes
  don't seem to be significant enough to justify fixing them further back.

  Discussion: <21072.1472321324@sss.pgh.pa.us>

* Don't propagate a null subtransaction snapshot up to parent transaction. (Tom Lane, 2016-08-07)

  This oversight could cause logical decoding to fail to decode an outer
  transaction containing changes, if a subtransaction had an XID but no
  actual changes.  Per bug #14279 from Marko Tiikkaja.  Patch by Marko based
  on analysis by Andrew Gierth.

  Discussion: <20160804191757.1430.39011@wrigleys.postgresql.org>

* Establish conventions about global object names used in regression tests. (Tom Lane, 2016-07-17)

  To ensure that "make installcheck" can be used safely against an existing
  installation, we need to be careful about what global object names
  (database, role, and tablespace names) we use; otherwise we might
  accidentally clobber important objects.  There's been a weak consensus that
  test databases should have names including "regression", and that test role
  names should start with "regress_", but we didn't have any particular rule
  about tablespace names; and neither of the other rules was followed with
  any consistency either.

  This commit moves us a long way towards having a hard-and-fast rule that
  regression test databases must have names including "regression", and that
  test role and tablespace names must start with "regress_".  It's not
  completely there because I did not touch some test cases in rolenames.sql
  that test creation of special role names like "session_user".  That will
  require some rethinking of exactly what we want to test, whereas the intent
  of this patch is just to hit all the cases in which the needed renamings
  are cosmetic.

  There is no enforcement mechanism in this patch either, but if we don't add
  one we can expect that the tests will soon be violating the convention
  again.  Again, that's not such a cosmetic change and it will require
  discussion.  (But I did use a quick-hack enforcement patch to find these
  cases.)

  Discussion: <16638.1468620817@sss.pgh.pa.us>

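  In concrete terms the convention looks like this (object names are
  illustrative only, not taken from the commit):

      CREATE ROLE regress_decoding_role;       -- test role names start with "regress_"
      CREATE DATABASE decoding_regression_db;  -- test database names contain "regression"
      -- test tablespace names likewise start with "regress_"
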
* pgindent run for 9.6 (Robert Haas, 2016-06-09)

* Fix core dump in ReorderBufferRestoreChange on alignment-picky platforms. (Tom Lane, 2016-04-14)

  When re-reading an update involving both an old tuple and a new tuple from
  disk, reorderbuffer.c was careless about whether the new tuple is suitably
  aligned for direct access --- in general, it isn't.  We'd missed seeing
  this in the buildfarm because the contrib/test_decoding tests exercise this
  code path only a few times, and by chance all of those cases have old
  tuples with length a multiple of 4, which is usually enough to make the
  access to the new tuple's t_len safe.  For some still-not-entirely-clear
  reason, however, Debian's sparc build gets a bus error, as reported by
  Christoph Berg; perhaps it's assuming 8-byte alignment of the pointer?

  The lack of previous field reports is probably because you need all of
  these conditions to trigger a crash: an alignment-picky platform (not
  Intel), a transaction large enough to spill to disk, an update within that
  xact that changes a primary-key field and has an odd-length old tuple, and
  of course logical decoding tracing the transaction.

  Avoid the alignment assumption by using memcpy instead of fetching t_len
  directly, and add a test case that exposes the crash on picky platforms.

  Back-patch to 9.4 where the bug was introduced.

  Discussion: <20160413094117.GC21485@msg.credativ.de>

* Add required database and origin filtering for logical messages. (Andres Freund, 2016-04-13)

  During decoding, logical messages (added in 3fe3511d05) failed to filter
  out messages emitted in other databases and messages emitted "under" a
  replication origin the output plugin isn't interested in.

  Add tests to verify that both types of filtering actually work.  While
  touching message.sql remove a hunk obsoleted by d25379e.

  Bump XLOG_PAGE_MAGIC because xl_logical_message changed and because
  3fe3511d05 had omitted doing so.  3fe3511d05 additionally didn't bump
  catversion, but 7a542700d has done so since.

  Author: Petr Jelinek
  Reported-By: Andres Freund
  Discussion: 20160406142513.wotqy3ba3kanr423@alap3.anarazel.de

* Modify test_decoding/messages to remove non-ascii chars (Simon Riggs, 2016-04-06)

* Generic Messages for Logical Decoding (Simon Riggs, 2016-04-06)

  API and mechanism to allow generic messages to be inserted into WAL that
  are intended to be read by logical decoding plugins.  This commit adds an
  optional new callback to the logical decoding API.

  Messages are either text or bytea.  Messages can be transactional, or not,
  and are identified by a prefix to allow multiple concurrent decoding
  plugins.

  (Not to be confused with Generic WAL records, which are intended to allow
  crash recovery of extensible objects.)

  Author: Petr Jelinek and Andres Freund
  Reviewers: Artur Zakirov, Tomas Vondra, Simon Riggs
  Discussion: 5685F999.6010202@2ndquadrant.com

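  At the SQL level this surfaces as pg_logical_emit_message(); a small sketch
  (prefix and payload are illustrative only):

      SELECT pg_logical_emit_message(true,  'test_prefix', 'a transactional message');
      SELECT pg_logical_emit_message(false, 'test_prefix', 'a non-transactional message');
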
* Fix phony .PHONY. (Tom Lane, 2016-03-19)

  A couple makefiles had misspelled the magic .PHONY target as PHONY.

* logical decoding: Fix handling of large old tuples with replica identity full. (Andres Freund, 2016-03-05)

  When decoding the old version of an UPDATE or DELETE change, and if that
  tuple was bigger than MaxHeapTupleSize, we either Assert'ed out, or failed
  in more subtle ways in non-assert builds.

  Normally individual tuples aren't bigger than MaxHeapTupleSize, with big
  datums toasted.  But that's not the case for the old version of a tuple for
  logical decoding; the replica identity is logged as one piece.  With the
  default replica identity btree limits that to small tuples, but that's not
  the case for FULL.

  Change the tuple buffer infrastructure to separately allocate over-large
  tuples, instead of always going through the slab cache.

  This unfortunately requires changing the ReorderBufferTupleBuf definition;
  we need to store the allocated size someplace.  To avoid requiring output
  plugins to recompile, don't store HeapTupleHeaderData directly after
  HeapTupleData, but point to it via t_data; that leaves room for the
  allocated size.  As there's no reason for an output plugin to look at
  ReorderBufferTupleBuf->t_data.header, remove the field.  It was just a
  minor convenience having it directly accessible.

  Reported-By: Adam DratwiƄski
  Discussion: CAKg6ypLd7773AOX4DiOGRwQk1TVOQKhNwjYiVjJnpq8Wo+i62Q@mail.gmail.com

* logical decoding: old/newtuple in spooled UPDATE changes was switched around. (Andres Freund, 2016-03-05)

  Somehow I managed to flip the order of restoring old & new tuples when
  de-spooling a change in a large transaction from disk.  This happens to
  only take effect when a change is spooled to disk which has old/new
  versions of the tuple.  That only is the case for UPDATEs where the primary
  key changed or where replica identity is changed to FULL.

  The tests didn't catch this because either spooled updates, or updates that
  changed primary keys, were tested; not both at the same time.

  Found while adding tests for the following commit.

  Backpatch: 9.4, where logical decoding was added

* logical decoding: Tell reorderbuffer about all xids. (Andres Freund, 2016-03-05)

  Logical decoding's reorderbuffer keeps transactions in an LSN ordered list
  for efficiency.  To make that efficiently possible upper-level xids are
  forced to be logged before nested subtransaction xids.  That only works
  though if these records are all looked at: Unfortunately we didn't do so
  for e.g. row level locks, which are otherwise uninteresting for logical
  decoding.

  This could lead to errors like: "ERROR: subxact logged without previous
  toplevel record".

  It's not sufficient to just look at row locking records; the xid could
  appear first due to a lot of other types of records (which will trigger the
  transaction to be marked logged with MarkCurrentTransactionIdLoggedIfAny).
  So invent infrastructure to tell reorderbuffer about xids seen, when they'd
  otherwise not pass through reorderbuffer.c.

  Reported-By: Jarred Ward
  Bug: #13844
  Discussion: 20160105033249.1087.66040@wrigleys.postgresql.org
  Backpatch: 9.4, where logical decoding was added

* Force synchronous_commit=on in test_decoding's concurrent_ddl_dml.spec. (Andres Freund, 2016-03-03)

  Otherwise running installcheck-force on a server with
  synchronous_commit=off will result in the tests failing.  All the other
  tests already do so...

  Backpatch: 9.4, where logical decoding was added

* logical decoding: fix decoding of a commit's commit time. (Andres Freund, 2016-03-02)

  When adding replication origins in 5aa235042, I somehow managed to set the
  timestamp of decoded transactions to InvalidXLogRecptr when decoding one
  made without a replication origin.  Fix that, and the wrong type of the new
  commit_time variable.

  This didn't trigger a regression test failure because we explicitly don't
  show commit timestamps in the regression tests, as they obviously are
  variable.  Add a test that checks that a decoded commit's timestamp is
  within minutes of NOW() from before the commit.

  Reported-By: Weiping Qu
  Diagnosed-By: Artur Zakirov
  Discussion: 56D4197E.9050706@informatik.uni-kl.de, 56D42918.1010108@postgrespro.ru
  Backpatch: 9.5, where 5aa235042 originates.

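  test_decoding exposes the commit time through its include-timestamp option;
  a hedged sketch of how the timestamp becomes visible (slot name is
  illustrative):

      SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
             'include-timestamp', 'on');
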
* Update copyright for 2016 (Bruce Momjian, 2016-01-02)

  Backpatch certain files through 9.1

* Message style improvements (Peter Eisentraut, 2015-10-28)

  Message style, plurals, quoting, spelling, consistency with similar
  messages

* Improve errhint() about replication slot naming restrictions. (Andres Freund, 2015-10-03)

  The existing hint talked about "may only contain letters", but the actual
  requirement is more strict: only lower case letters are allowed.

  Reported-By: Rushabh Lathia
  Author: Rushabh Lathia
  Discussion: AGPqQf2x50qcwbYOBKzb4x75sO_V3g81ZsA8+Ji9iN5t_khFhQ@mail.gmail.com
  Backpatch: 9.4-, where replication slots were added

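  A hedged illustration of what the hint covers (slot names are examples
  only; in released versions the accepted characters are lower-case letters,
  digits and underscores):

      SELECT pg_create_logical_replication_slot('valid_slot_1', 'test_decoding');   -- accepted
      SELECT pg_create_logical_replication_slot('Invalid-Slot', 'test_decoding');   -- rejected, with the hint
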
* test_decoding: Protect against rare spurious test failures. (Andres Freund, 2015-09-22)

  A bunch of tests missed specifying that empty transactions shouldn't be
  displayed.  That causes problems when e.g. autovacuum runs in an
  unfortunate moment.  The tests in question only run for a very short time,
  making this quite unlikely.

  Reported-By: Buildfarm member axolotl
  Backpatch: 9.4, where logical decoding was introduced

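  "Specifying that empty transactions shouldn't be displayed" refers to
  test_decoding's skip-empty-xacts option; roughly (a hedged sketch, not the
  exact test query):

      SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
             'include-xids', '0', 'skip-empty-xacts', '1');
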
* Remove bogus step from test_decoding isolation tests. (Robert Haas, 2015-08-14)

  Commit 43b4a16817c8b5568cec72f3b0e1c8209f5ac7f7 made the isolation tester
  reject duplicate step names, and it turns out that the test_decoding
  module's concurrent_ddl_dml isolation test has a duplicate name.  I think
  the second definition isn't actually getting used, so just remove it.

  Per buildfarm.

* Add confirmed_flush column to pg_replication_slots. (Andres Freund, 2015-08-10)

  There's no reason not to expose both restart_lsn and confirmed_flush since
  they have rather distinct meanings.  The former is the oldest WAL still
  required and valid for both physical and logical slots, whereas the latter
  is the location up to which a logical slot's consumer has confirmed
  receiving data.  Most of the time a slot will require older WAL (i.e.
  restart_lsn) than the confirmed position (i.e. confirmed_flush_lsn).

  Author: Marko Tiikkaja, editorialized by me
  Discussion: 559D110B.1020109@joh.to

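  A quick way to see both positions (assuming the column name
  confirmed_flush_lsn used by released versions):

      SELECT slot_name, restart_lsn, confirmed_flush_lsn FROM pg_replication_slots;
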
* Fix test_decoding's handling of nonexistent columns in old tuple versions. (Andres Freund, 2015-06-27)

  test_decoding used fastgetattr() to extract column values.  That's wrong
  when decoding updates and deletes if a table's replica identity is set to
  FULL and new columns have been added since the old version of the tuple was
  created.  Due to the lack of a crosscheck with the datum's natts values an
  invalid value will be output, leading to errors or worse.

  Bug: #13470
  Reported-By: Krzysztof Kotlarski
  Discussion: 20150626100333.3874.90852@wrigleys.postgresql.org

  Backpatch to 9.4, where the feature, including the bug, was added.

* pgindent run for 9.5 (Bruce Momjian, 2015-05-23)

* Collection of typo fixes. (Heikki Linnakangas, 2015-05-20)

  Use "a" and "an" correctly, mostly in comments.  Two error messages were
  also fixed (they were just elogs, so no translation work required).  Two
  function comments in pg_proc.h were also fixed.  Etsuro Fujita reported one
  of these, but I found a lot more with grep.

  Also fix a few other typos spotted while grepping for the a/an typos.  For
  example, "consists out of ..." -> "consists of ...".  Plus a "though"/
  "through" mixup reported by Euler Taveira.

  Many of these typos were in old code, which would be nice to backpatch to
  make future backpatching easier.  But much of the code was new, and I
  didn't feel like crafting separate patches for each branch.  So no
  backpatching.

* Replace some appendStringInfo* calls with more appropriate variants (Peter Eisentraut, 2015-05-11)

  Author: David Rowley <dgrowleyml@gmail.com>

* Remove dependency on ordering in logical decoding upsert test. (Andres Freund, 2015-05-08)

  Buildfarm member magpie sorted the output differently than intended by
  Peter.  "Resolve" the problem by simply not aggregating; it's not that many
  lines.

* Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE. (Andres Freund, 2015-05-08)

  The newly added ON CONFLICT clause allows specifying an alternative to
  raising a unique or exclusion constraint violation error when inserting.
  ON CONFLICT refers to constraints that can either be specified using an
  inference clause (by specifying the columns of a unique constraint) or by
  naming a unique or exclusion constraint.  DO NOTHING avoids the constraint
  violation, without touching the pre-existing row.  DO UPDATE SET ...
  [WHERE ...] updates the pre-existing tuple, and has access to both the
  tuple proposed for insertion and the existing tuple; the optional WHERE
  clause can be used to prevent an update from being executed.  The UPDATE
  SET and WHERE clauses have access to the tuple proposed for insertion using
  the "magic" EXCLUDED alias, and to the pre-existing tuple using the table
  name or its alias.

  This feature is often referred to as upsert.

  This is implemented using a new infrastructure called "speculative
  insertion".  It is an optimistic variant of regular insertion that first
  does a pre-check for existing tuples and then attempts an insert.  If a
  violating tuple was inserted concurrently, the speculatively inserted tuple
  is deleted and a new attempt is made.  If the pre-check finds a matching
  tuple the alternative DO NOTHING or DO UPDATE action is taken.  If the
  insertion succeeds without detecting a conflict, the tuple is deemed
  inserted.

  To handle the possible ambiguity between the excluded alias and a table
  named excluded, and for convenience with long relation names, INSERT INTO
  now can alias its target table.

  Bumps catversion as stored rules change.

  Author: Peter Geoghegan, with significant contributions from Heikki
  Linnakangas and Andres Freund.  Testing infrastructure by Jeff Janes.
  Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
  Dean Rasheed, Stephen Frost and many others.

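  A short sketch of the syntax (table and values are illustrative only):

      CREATE TABLE upsert_demo (id int PRIMARY KEY, val text);
      INSERT INTO upsert_demo VALUES (1, 'first') ON CONFLICT (id) DO NOTHING;
      INSERT INTO upsert_demo AS t VALUES (1, 'second')
      ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val
      WHERE t.val IS DISTINCT FROM EXCLUDED.val;
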
* Fix parallel make risk with new check temp-install setup (Peter Eisentraut, 2015-04-29)

  The "check" target no longer needs to depend on "all", because it now runs
  "install" directly, which in turn depends on "all".  Doing both will cause
  problems with parallel make, because two builds will run next to each
  other.

  Also remove the redirection of the temp-install output into a log file.
  This was appropriate when this was done from within pg_regress, but now
  it's just a regular make run, and especially with the above changes this
  will now take the place of running the "all" target before the test suites.

  problem report by Jeff Janes, patch in part by Michael Paquier

* Introduce replication progress tracking infrastructure. (Andres Freund, 2015-04-29)

  When implementing a replication solution on top of logical decoding, two
  related problems exist:
  * How to safely keep track of replication progress
  * How to change replication behavior, based on the origin of a row;
    e.g. to avoid loops in bi-directional replication setups

  The solution to these problems, as implemented here, consists of three
  parts:

  1) 'replication origins', which identify nodes in a replication setup.
  2) 'replication progress tracking', which remembers, for each replication
     origin, how far replay has progressed in an efficient and crash-safe
     manner.
  3) The ability to filter out changes performed on the behest of a
     replication origin during logical decoding; this allows complex
     replication topologies.  E.g. by filtering all replayed changes out.

  Most of this could also be implemented in "userspace", e.g. by inserting
  additional rows containing origin information, but that ends up being much
  less efficient and more complicated.  We don't want to require various
  replication solutions to reimplement logic for this independently.  The
  infrastructure is intended to be generic enough to be reusable.

  This infrastructure also replaces the 'nodeid' infrastructure of commit
  timestamps.  It is intended to provide all the former capabilities, except
  that there's only 2^16 different origins; but now they integrate with
  logical decoding.  Additionally more functionality is accessible via SQL.
  Since the commit timestamp infrastructure has also been introduced in 9.5
  (commit 73c986add) changing the API is not a problem.

  For now the number of origins for which the replication progress can be
  tracked simultaneously is determined by the max_replication_slots GUC.
  That GUC is not a perfect match to configure this, but there doesn't seem
  to be sufficient reason to introduce a separate new one.

  Bumps both catversion and wal page magic.

  Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
  Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
  Discussion: 20150216002155.GI15326@awork2.anarazel.de,
      20140923182422.GA15776@alap3.anarazel.de,
      20131114172632.GE7522@alap2.anarazel.de

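  The SQL-accessible surface mentioned above looks roughly like this (origin
  name and LSN are illustrative; a hedged sketch, not from the commit):

      SELECT pg_replication_origin_create('demo_origin');        -- register a node
      SELECT pg_replication_origin_session_setup('demo_origin'); -- attach this session to it
      SELECT pg_replication_origin_xact_setup('0/1000', now());  -- record remote commit LSN/time
      SELECT * FROM pg_replication_origin_status;                -- inspect replay progress
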
* Improve speed of make check-world (Peter Eisentraut, 2015-04-23)

  Before, make check-world would create a new temporary installation for each
  test suite, which is slow and wasteful.  Instead, we now create one test
  installation that is used by all test suites that are part of a make run.

  The management of the temporary installation is removed from pg_regress and
  handled in the makefiles.  This allows for better control, and unifies the
  code with that of test suites not run through pg_regress.

  review and msvc support by Michael Paquier <michael.paquier@gmail.com>
  more review by Fabien Coelho <coelho@cri.ensmp.fr>

* Rename pg_replication_slot's new active_in to active_pid. (Andres Freund, 2015-04-22)

  In d811c037ce active_in was added but discussion since showed that
  active_pid is preferred as a name.

  Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com

* Add 'active_in' column to pg_replication_slots. (Andres Freund, 2015-04-21)

  Right now it is visible whether a replication slot is active in any
  session, but not in which.  Adding the active_in column, containing the pid
  of the backend having acquired the slot, makes it much easier to associate
  pg_replication_slots entries with the corresponding
  pg_stat_replication/pg_stat_activity row.

  This should have been done from the start, but I (Andres) dropped the ball
  there somehow.

  Author: Craig Ringer, revised by me
  Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com

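  With the rename noted above, that association can be made roughly like this
  (a hedged sketch using the final column name, active_pid):

      SELECT s.slot_name, s.active_pid, a.application_name
      FROM pg_replication_slots s
      LEFT JOIN pg_stat_activity a ON a.pid = s.active_pid;
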
* Update copyright for 2015 (Bruce Momjian, 2015-01-06)

  Backpatch certain files through 9.0

* Don't skip SQL backends in logical decoding for visibility computation. (Andres Freund, 2014-12-02)

  The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING
  PGXACT flag, which allows such backends to be skipped when computing the
  xmin horizon/snapshots.  That's fine and sensible for walsenders streaming
  out logical changes, but not at all fine for SQL backends doing logical
  decoding.  If the latter set that flag, any change they have performed
  outside of logical decoding will not be regarded as visible - which e.g.
  can lead to that change being vacuumed away.

  Note that not setting the flag for SQL backends isn't particularly
  bothersome - the SQL backend doesn't do streaming, so it only runs for a
  limited amount of time.

  Per buildfarm member 'tick' and Alvaro.

  Backpatch to 9.4, where logical decoding was introduced.

* Fix and improve cache invalidation logic for logical decoding. (Andres Freund, 2014-11-13)

  There are basically three situations in which logical decoding needs to
  perform cache invalidation: during/after replaying a transaction with
  catalog changes, when skipping an uninteresting transaction that performed
  catalog changes, and when erroring out while replaying a transaction.
  Unfortunately these three cases were all done slightly differently -
  partially because 8de3e410fa, which greatly simplifies matters, got
  committed in the midst of the development of logical decoding.

  The actually problematic case was when logical decoding skipped transaction
  commits (and thus processed invalidations).  When used via the SQL
  interface cache invalidation could access the catalog - bad, because we
  didn't set up enough state to allow that correctly.  It'd not be hard to
  set up sufficient state, but the simpler solution is to always perform
  cache invalidation outside a valid transaction.

  Also make the different cache invalidation cases look as similar as
  possible, to ease code review.

  This fixes the assertion failure reported by Antonin Houska in
  53EE02D9.7040702@gmail.com.  The presented testcase has been expanded into
  a regression test.

  Backpatch to 9.4, where logical decoding was introduced.

* Fix several weaknesses in slot and logical replication on-disk serialization. (Andres Freund, 2014-11-12)

  Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and snapbuild.c
  were missing the FIN_CRC32 call when computing/checking checksums of
  on-disk files.  That doesn't lower the error detection capabilities of the
  checksum, but is inconsistent with other usages.

  In a followup mail Heikki also noticed that, contrary to a comment, the
  'version' and 'length' struct fields of replication slot's on-disk data
  were not covered by the checksum.  That's not likely to lead to actually
  missed corruption as those fields are cross checked with the expected
  version and the actual file length.  But it's wrong nonetheless.

  As fixing these issues makes existing on-disk files unreadable, bump the
  expected versions of on-disk files for both slots and logical decoding
  historic catalog snapshots.  This means that loading old files will fail
  with
  ERROR: "replication slot file ... has unsupported version 1"
  and
  ERROR: "snapbuild state file ... has unsupported version 1 instead of 2"
  respectively.  Given the low likelihood of anybody already using these new
  features in a production setup that seems acceptable.

  Fixing these issues made me notice that there's no regression test covering
  the loading of historic snapshot from disk - so add one.

  Backpatch to 9.4 where these features were introduced.

* Improve documentation about binary/textual output mode for output plugins. (Andres Freund, 2014-10-01)

  Also improve related error message as it contributed to the confusion.

  Discussion: CAB7nPqQrqFzjqCjxu4GZzTrD9kpj6HMn9G5aOOMwt1WZ8NfqeA@mail.gmail.com,
      CAB7nPqQXc_+g95zWnqaa=mVQ4d3BVRs6T41frcEYi2ocUrR3+A@mail.gmail.com

  Per discussion between Michael Paquier, Robert Haas and Andres Freund

  Backpatch to 9.4 where logical decoding was introduced.

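  The distinction in question is the one between the textual and binary
  variants of the SQL-level decoding functions; a hedged illustration (slot
  name is an example only):

      SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);         -- textual output mode
      SELECT data FROM pg_logical_slot_peek_binary_changes('regression_slot', NULL, NULL);  -- binary output mode
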
* Assorted message fixes and improvements (Peter Eisentraut, 2014-09-05)