path: root/src
* Fix segfault in ALTER PUBLICATION/SUBSCRIPTION RENAME (Peter Eisentraut, 2017-03-07)
  From: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-by: Fujii Masao <masao.fujii@gmail.com>
* hash: Refactor hash index creation. (Robert Haas, 2017-03-07)
  The primary goal here is to move all of the related page modifications
  to a single section of code, in preparation for adding write-ahead
  logging. In passing, rename _hash_metapinit to _hash_init, since it
  initializes more than just the metapage.
  Amit Kapila. The larger patch series of which this is a part has been
  reviewed and tested by Álvaro Herrera, Ashutosh Sharma, Mark Kirkwood,
  Jeff Janes, and Jesper Pedersen.
* Improve postgresql.conf.sample comments about parallel workers. (Robert Haas, 2017-03-07)
  David Rowley, reviewed by Amit Kapila
  Discussion: http://postgr.es/m/CAKJS1f8gPEUPscj6kSqpveMnnx9_3ZypzwsKstv+8atx6VmjBg@mail.gmail.com
* Properly initialize variable. (Robert Haas, 2017-03-07)
  Commit 3bc7dafa9bebbdaa1bbf0da0798d29a8bdaf6a8f forgot to do this.
  Noted while experimenting with valgrind.
* Invent start_proc parameters for PL/Tcl. (Tom Lane, 2017-03-07)
  Define GUCs pltcl.start_proc and pltclu.start_proc. When set to a
  nonempty value at the time a new Tcl interpreter is created, the
  parameterless pltcl or pltclu function named by the GUC is called to
  allow user-controlled initialization to occur within the interpreter.
  This is modeled on plv8's start_proc parameter, and also has much in
  common with plperl's on_init feature. It allows users to fully replace
  the "modules" feature that was removed in commit 817f2a586.
  Since an initializer function could subvert later Tcl code in nearly
  arbitrary ways, mark both GUCs as SUSET for now. It would be nice to
  find a way to relax that someday; but the corresponding GUCs in plperl
  are also SUSET, and there's not been much complaint.
  Discussion: https://postgr.es/m/22067.1488046447@sss.pgh.pa.us
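  A minimal usage sketch; the function and database names here are
  hypothetical, not from the commit:

      -- Parameterless init function, run once per new Tcl interpreter
      CREATE FUNCTION tcl_session_init() RETURNS void AS $$
          # Tcl code: predefine helper procs for later pltcl functions
          proc greet {who} { return "hello, $who" }
      $$ LANGUAGE pltcl;

      -- The GUC is SUSET, so a superuser must set it
      ALTER DATABASE mydb SET pltcl.start_proc = 'tcl_session_init';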
* Clean up test_ifaddrs a bit. (Tom Lane, 2017-03-07)
  We customarily #include <netinet/in.h> before <arpa/inet.h>; according
  to our git history (cf commit 527f8babc) there used to be platform(s)
  where <arpa/inet.h> didn't compile otherwise. That's probably not
  really an issue anymore, but since test_ifaddrs.c is the one and only
  place in our code that's not following that rule, bring it into line.
  Also remove #include <sys/socket.h>, as that's duplicative given that
  libpq/ifaddr.h does so (via pqcomm.h).
  In passing, add a .gitignore file so nobody accidentally commits the
  test_ifaddrs executable, as I nearly did.
  I see no particular need to back-patch this, as it's just neatnik-ism
  considering we don't build test_ifaddrs by default, or even document it
  anywhere.
* A collection of small fixes for the SCRAM patch. (Heikki Linnakangas, 2017-03-07)
  * Add required #includes for htonl. Per buildfarm members
    pademelon/gaur.
  * Remove unnecessary "#include <utils/memutils>".
  * Fix checking for empty string in pg_SASL_init. (Reported by Peter
    Eisentraut and his compiler)
  * Move code in pg_SASL_init to match the recent changes (commit
    ba005f193d) to pg_fe_sendauth() function, where it's copied from.
  * Return value of malloc() was not checked for NULL in
    scram_SaltedPassword(). Fix by avoiding the malloc().
* Consider parallel merge joins. (Robert Haas, 2017-03-07)
  Commit 45be99f8cd5d606086e0a458c9c72910ba8a613d took the position that
  performing a merge join in parallel was not likely to work out well,
  but this conclusion was greeted with skepticism even at the time.
  Whether it was true then or not, it's clearly not true any more now
  that we have parallel index scan.
  Dilip Kumar, reviewed by Amit Kapila and by me.
  Discussion: http://postgr.es/m/CAFiTN-v3=cM6nyFwFGp0fmvY4=kk79Hq9Fgu0u8CSJ-EEq1Tiw@mail.gmail.com
* Fix pgbench's failure to honor the documented long-form option "--builtin". (Tom Lane, 2017-03-07)
  Not only did it not accept --builtin as a synonym for -b, but what it
  did accept as a synonym was --tpc-b (huh?), which it got even further
  wrong by marking as no_argument, so that if you did try that you got a
  core dump. I suppose this is leftover from some early design for the
  new switches added by commit 8bea3d221, but it's still pretty sloppy
  work. Per bug #14580 from Stepan Pesternikov. Back-patch to 9.6 where
  the error was introduced.
  Report: https://postgr.es/m/20170307123347.25054.73207@wrigleys.postgresql.org
* Give partitioned table "p" in regression tests a less generic name. (Robert Haas, 2017-03-07)
  And don't drop it, so that we improve the coverage of the pg_upgrade
  regression tests.
  Amit Langote, per a gripe from Tom Lane
  Discussion: http://postgr.es/m/9071.1488863082@sss.pgh.pa.us
* Fix relcache reference leak. (Robert Haas, 2017-03-07)
  Reported by Kevin Grittner. Faulty commit identified by Tom Lane.
  Patch by Amit Langote, reviewed by Michael Paquier.
  Discussion: http://postgr.es/m/CACjxUsOHbH1=99u8mGxmLHfy5hov4ENEpvM6=3ARjos7wG7rtQ@mail.gmail.com
* Fix wrong word in comment. (Robert Haas, 2017-03-07)
  Third time's the charm.
* Remove vestigial grammar support for CHARACTER ... CHARACTER SET option. (Tom Lane, 2017-03-07)
  The SQL standard says that you should be able to write "CHARACTER SET
  foo" as part of the declaration of a char-type column. We don't
  implement that, but a rough form of support has existed in gram.y since
  commit f10b63923. That's now sat there for nigh 20 years without anyone
  fleshing it out --- and even if someone did, the contemplated approach
  of having separate data type name(s) for every character set certainly
  isn't what we'd do today. Let's just remove the grammar production; if
  anyone is ever motivated to work on this, reinventing the grammar
  support is a trivial fraction of what they'd have to do. And we've
  never documented anything about supporting such a clause.
  Per gripe from Neha Khatri.
  Discussion: https://postgr.es/m/CAFO0U+-iOS5oYN5v3SBuZvfhPUTRrkDFEx8w7H17B07Rwg3YUA@mail.gmail.com
* Preparatory refactoring for parallel merge join support. (Robert Haas, 2017-03-07)
  Extract the logic used by hash_inner_and_outer into a separate
  function, get_cheapest_parallel_safe_total_inner, so that it can also
  be used to plan parallel merge joins.
  Also, add a require_parallel_safe argument to the existing function
  get_cheapest_path_for_pathkeys, because parallel merge join needs to
  find the cheapest path for a given set of pathkeys that is
  parallel-safe, not just the cheapest one overall.
  Patch by me, reviewed by Dilip Kumar.
  Discussion: http://postgr.es/m/CA+TgmoYOv+dFK0MWW6366dFj_xTnohQfoBDrHyB7d1oZhrgPjA@mail.gmail.com
* Fix parallel hash join path search. (Robert Haas, 2017-03-07)
  When the very cheapest path is not parallel-safe, we want to instead
  use the cheapest unparameterized path that is. The old code searched
  innerrel->cheapest_parameterized_paths, but that isn't right, because
  the path we want may not be in that list. Search innerrel->pathlist
  instead.
  Spotted by Dilip Kumar.
  Discussion: http://postgr.es/m/CAFiTN-szCEcZrQm0i_w4xqSaRUTOUFstNu32Zn4rxxDcoa8gnA@mail.gmail.com
* psql: Add \gx command (Stephen Frost, 2017-03-07)
  It can often be useful to use expanded mode output (\x) for just a
  single query. Introduce a \gx which acts exactly like \g except that it
  will force expanded output mode for that one \gx call. This is simpler
  than having to use \x as a toggle and also means that the user doesn't
  have to worry about the current state of the expanded variable, or
  resetting it later, to ensure a given query is always returned in
  expanded mode.
  Primarily Christoph's patch, though I did tweak the documentation and
  help text a bit, and re-indented the tab completion section.
  Author: Christoph Berg
  Reviewed By: Daniel Verite
  Discussion: https://postgr.es/m/20170127132737.6skslelaf4txs6iw%40msg.credativ.de
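  For illustration, a one-off expanded result without touching the \x
  toggle (the query is arbitrary):

      -- In psql, end the query with \gx instead of ; or \g
      SELECT datname, encoding FROM pg_database \gx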
* Allow pg_dumpall to dump roles w/o user passwords (Simon Riggs, 2017-03-07)
  Add new option --no-role-passwords which dumps roles without passwords.
  Since we don't need passwords, we choose to use pg_roles in preference
  to pg_authid, since access to pg_authid may be restricted for security
  reasons in some configurations.
  Robins Tharakan and Simon Riggs
* Fix comments in SCRAM-SHA-256 patch. (Heikki Linnakangas, 2017-03-07)
  Amit Kapila.
* Ensure ThisTimeLineID is valid before START_REPLICATION (Simon Riggs, 2017-03-07)
  Craig Ringer
* Add regression tests for passwords. (Heikki Linnakangas, 2017-03-07)
  Michael Paquier.
* Support SCRAM-SHA-256 authentication (RFC 5802 and 7677). (Heikki Linnakangas, 2017-03-07)
  This introduces a new generic SASL authentication method, similar to
  the GSS and SSPI methods. The server first tells the client which SASL
  authentication mechanism to use, and then the mechanism-specific SASL
  messages are exchanged in AuthenticationSASLcontinue and
  PasswordMessage messages. Only SCRAM-SHA-256 is supported at the
  moment, but this allows adding more SASL mechanisms in the future,
  without changing the overall protocol.
  Support for channel binding, aka SCRAM-SHA-256-PLUS is left for later.
  The SASLPrep algorithm, for pre-processing the password, is not yet
  implemented. That could cause trouble, if you use a password with
  non-ASCII characters, and a client library that does implement
  SASLprep. That will hopefully be added later.
  Authorization identities, as specified in the SCRAM-SHA-256
  specification, are ignored. SET SESSION AUTHORIZATION provides more or
  less the same functionality, anyway.
  If a user doesn't exist, perform a "mock" authentication, by
  constructing an authentic-looking challenge on the fly. The challenge
  is derived from a new system-wide random value, "mock authentication
  nonce", which is created at initdb, and stored in the control file. We
  go through these motions, in order to not give away the information on
  whether the user exists, to unauthenticated users.
  Bumps PG_CONTROL_VERSION, because of the new field in control file.
  Patch by Michael Paquier and Heikki Linnakangas, reviewed at different
  stages by Robert Haas, Stephen Frost, David Steele, Aleksander
  Alekseev, and many others.
  Discussion: https://www.postgresql.org/message-id/CAB7nPqRbR3GmFYdedCAhzukfKrgBLTLtMvENOmPrVWREsZkF8g%40mail.gmail.com
  Discussion: https://www.postgresql.org/message-id/CAB7nPqSMXU35g%3DW9X74HVeQp0uvgJxvYOuA4A-A3M%2B0wfEBv-w%40mail.gmail.com
  Discussion: https://www.postgresql.org/message-id/55192AFE.6080106@iki.fi
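  A rough usage sketch; the GUC value spelling shown here
  ('scram-sha-256') is the one that eventually shipped in PostgreSQL 10
  and was still in flux at the time of this commit:

      -- Store new passwords as SCRAM verifiers rather than MD5 hashes
      SET password_encryption = 'scram-sha-256';
      CREATE ROLE app_user LOGIN PASSWORD 'correct horse battery staple';
      -- pg_hba.conf must then request the matching method, e.g.:
      --   host  all  app_user  10.0.0.0/8  scram-sha-256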
* Refactor SHA2 functions and move them to src/common/. (Heikki Linnakangas, 2017-03-07)
  This way both frontend and backends can use them. The functions are
  taken from pgcrypto, which now fetches the source files it needs from
  src/common/.
  A new interface is designed for the SHA2 functions, which allow linking
  to either OpenSSL or the in-core stuff taken from KAME as needed.
  Michael Paquier, reviewed by Robert Haas.
  Discussion: https://www.postgresql.org/message-id/CAB7nPqTGKuTM5jiZriHrNaQeVqp5e_iT3X4BFLWY_HyHxLvySQ%40mail.gmail.com
* pg_dump: Properly handle public schema ACLs with --clean (Stephen Frost, 2017-03-06)
  pg_dump has always handled the public schema in a special way when it
  comes to the "--clean" option. To wit, we do not drop or recreate the
  public schema in "normal" mode, but when run in "--clean" mode we drop
  and recreate it, and the recreated schema has the normal schema-default
  privileges of "nothing". This is unlike how the public schema starts
  life, which is to have CREATE and USAGE GRANT'd to the PUBLIC role, and
  that is what is recorded in pg_init_privs.
  Due to this, in "--clean" mode, pg_dump would mistakenly only dump out
  the set of privileges required to go from the initdb-time privileges on
  the public schema to whatever the current-state privileges are. If the
  privileges were not changed from initdb time, then no privileges would
  be dumped out for the public schema, but with the schema being dropped
  and recreated, the result was that the public schema would have no ACLs
  on it instead of what it should have, which is the initdb-time
  privileges.
  Practically speaking, this meant that pg_dump in --clean mode, dumping
  a database where the ACLs on the public schema were not changed from
  the default, would, upon restore, produce a public schema with *no*
  privileges GRANT'd, not matching the state of the existing database
  (where the initdb-time privileges would have been CREATE and USAGE to
  the PUBLIC role for the public schema).
  To fix, adjust the query in getNamespaces() to ignore the pg_init_privs
  entry for the public schema when running in "--clean" mode, meaning
  that the privileges for the public schema are dumped, correctly, as if
  going from a newly-created schema to the current state (which is,
  indeed, what will happen during the restore thanks to the
  DROP/CREATE).
  Only the public schema is handled in this special way by pg_dump; no
  other initdb-time objects are dropped/recreated in --clean mode.
  Back-patch to 9.6 where the bug was introduced.
  Discussion: https://postgr.es/m/3534542.o3cNaKiDID%40techfox
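  To make the lost state concrete, these are the initdb-time grants on
  the public schema that a corrected --clean dump must now re-issue
  explicitly after the DROP/CREATE (a sketch, not literal pg_dump
  output):

      GRANT CREATE, USAGE ON SCHEMA public TO PUBLIC;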
* Repair incorrect pg_dump labeling for some comments and security labels. (Tom Lane, 2017-03-06)
  We attached no schema label to comments for procedural languages,
  casts, transforms, operator classes, operator families, or text search
  objects. The first three categories of objects don't really have
  schemas, but pg_dump treats them as if they do, and it seems like the
  TocEntry fields for their comments had better match the TocEntry fields
  for the parent objects. (As an example of a possible hazard, the type
  names in a CAST will be formatted with the assumption of a particular
  search_path, so failing to ensure that this same path is active for the
  COMMENT ON command could lead to an error or to attaching the comment
  to the wrong cast.) In the last six cases, this was a flat-out error
  --- possibly mine to begin with, but it was a long time ago.
  The security label for a procedural language was likewise not correctly
  labeled as to schema, and both the comment and security label for a
  procedural language were not correctly labeled as to owner.
  In simple cases the restore would accidentally work correctly anyway,
  since these comments and security labels would normally get emitted
  right after the owning object, and so the search path and active user
  would be correct anyhow. But it could fail in corner cases; for example
  a schema-selective restore would omit comments it should include.
  Giuseppe Broccolo noted the oversight, and proposed the correct fix,
  for text search dictionary objects; I found the rest by cross-checking
  other dumpComment() calls. These oversights are ancient, so back-patch
  all the way.
  Discussion: https://postgr.es/m/CAFzmHiWwwzLjzwM4x5ki5s_PDMR6NrkipZkjNnO3B0xEpBgJaA@mail.gmail.com
* Make simplehash.h grow hashtable in additional cases. (Andres Freund, 2017-03-06)
  Increase the size when either the distance between actual and optimal
  slot grows too large, or when too many subsequent entries would have to
  be moved.
  This addresses reports that the simplehash performed, sometimes
  considerably, worse than dynahash. The reason turned out to be that
  insertions into the hashtable were, due to the use of parallel query,
  in effect done from another hashtable, in hash-value order. If the
  target hashtable, due to mis-estimation, was sized a lot smaller than
  the source table(s), that led to very imbalanced tables; a lot of
  entries in many close-by buckets from the source tables were inserted
  into a single, wider, bucket on the target table. As the growth factor
  was solely computed based on the fillfactor, the performance of the
  table decreased further and further.
  b81b5a96f424531b was an attempt to address this problem for hash
  aggregates (but not for bitmap scans), but it turns out that the
  current method of mixing hash values often actually leaves neighboring
  hash-values close to each other, just in a different value range. It
  might be worth revisiting that independently of the performance issues
  addressed in this patch.
  To address that problem, resize tables in two additional cases: firstly
  when the optimal position for an entry would be far from the actual
  position, secondly when many entries would have to be moved to make
  space for the new entry (while satisfying the robin hood property).
  Due to the additional resizing threshold it seems possible, and testing
  so far confirms, that a higher fillfactor doesn't hurt performance and
  saves a bit of memory. It seems better to increase it now, before a
  release containing any of this code, rather than wonder in some later
  release.
  The various boundaries aren't determined in a particularly scientific
  manner; they might need some fine-tuning.
  In all my tests the new code now, even with parallelism, performs at
  least as well as the old code, in several scenarios significantly
  better.
  Reported-By: Dilip Kumar, Robert Haas, Kuntal Ghosh
  Discussion: https://postgr.es/m/CAFiTN-vagvuAydKG9VnWcoK=ADAhxmOa4ZTrmNsViBBooTnriQ@mail.gmail.com
  https://postgr.es/m/CAGz5QC+=fNTYgzMLTBUNeKt6uaWZFXJbkB5+7oWm-n9DwVxcLA@mail.gmail.com
* pg_upgrade: Fix large object COMMENTS, SECURITY LABELS (Stephen Frost, 2017-03-06)
  When performing a pg_upgrade, we copy the files behind pg_largeobject
  and pg_largeobject_metadata, allowing us to avoid having to dump out
  and reload the actual data for large objects and their ACLs.
  Unfortunately, that isn't all of the information which can be
  associated with large objects. Currently, we also support COMMENTs and
  SECURITY LABELs with large objects and these were being silently
  dropped during a pg_upgrade as pg_dump would skip everything having to
  do with a large object and pg_upgrade only copied the tables mentioned
  to the new cluster.
  As the file copies happen after the catalog dump and reload, we can't
  simply include the COMMENTs and SECURITY LABELs in pg_dump's
  binary-mode output but we also have to include the actual large object
  definition as well. With the definition, comments, and security labels
  in the pg_dump output and the file copies performed by pg_upgrade, all
  of the data and metadata associated with large objects is able to be
  successfully pulled forward across a pg_upgrade.
  In 9.6 and master, we can simply adjust the dump bitmask to indicate
  which components we don't want. In 9.5 and earlier, we have to put
  explicit checks in dumpBlob() and dumpBlobs() to not include the ACL or
  the data when in binary-upgrade mode.
  Adjustments made to the privileges regression test to allow another
  test (large_object.sql) to be added which explicitly leaves a large
  object with a comment in place to provide coverage of that case with
  pg_upgrade.
  Back-patch to all supported branches.
  Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
* Avoid dangling pointer to relation name in RLS code path in DoCopy(). (Tom Lane, 2017-03-06)
  With RLS active, "COPY tab TO ..." failed under
  -DRELCACHE_FORCE_RELEASE, and would sometimes fail without that,
  because it used the relation name directly from the relcache as part of
  the parsetree it's building. That becomes a potentially-dangling
  pointer as soon as the relcache entry is closed, a bit further down.
  Typical symptom if the relcache entry chanced to get cleared would be
  "relation does not exist" error with a garbage relation name, or
  possibly a core dump; but if you were really truly unlucky, the COPY
  might copy from the wrong table.
  Per report from Andrew Dunstan that regression tests fail with
  -DRELCACHE_FORCE_RELEASE. The core tests now pass for me (but have not
  tried "make check-world" yet).
  Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
* Combine several DROP variants into generic DropStmt (Peter Eisentraut, 2017-03-06)
  Combine DROP of FOREIGN DATA WRAPPER, SERVER, POLICY, RULE, and TRIGGER
  into generic DropStmt grammar.
  Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
* Allow dropping multiple functions at once (Peter Eisentraut, 2017-03-06)
  The generic drop support already supported dropping multiple objects of
  the same kind at once. But the previous representation of function
  signatures across two grammar symbols and structure members made this
  cumbersome to do for functions, so it was not supported. Now that
  function signatures are represented by a single structure, it's trivial
  to add this support. Same for aggregates and operators.
  Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
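  So statements like the following now work as a single command (the
  function and aggregate names are illustrative):

      DROP FUNCTION area_of(circle), area_of(box);
      DROP AGGREGATE my_avg(numeric), my_avg(float8);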
* Replace LookupFuncNameTypeNames() with LookupFuncWithArgs() (Peter Eisentraut, 2017-03-06)
  The old function took function name and function argument list as
  separate arguments. Now that all function signatures are passed around
  as ObjectWithArgs structs, this is no longer necessary and can be
  replaced by a function that takes ObjectWithArgs directly. Similarly
  for aggregates and operators.
  Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
* Remove objname/objargs split for referring to objects (Peter Eisentraut, 2017-03-06)
  In simpler times, it might have worked to refer to all kinds of objects
  by a list of name components and an optional argument list. But this
  doesn't work for all objects, which has resulted in a collection of
  hacks to place various other node types into these fields, which have
  to be unpacked at the other end. This also makes it weird to represent
  lists of such things in the grammar, because they would have to be
  lists of singleton lists, to make the unpacking work consistently. The
  other problem is that keeping separate name and args fields makes it
  awkward to deal with lists of functions.
  Change that by dropping the objargs field and have objname, renamed to
  object, be a generic Node, which can then be flexibly assigned and
  managed using the normal Node mechanisms. In many cases it will still
  be a List of names, in some cases it will be a string Value, for types
  it will be the existing Typename, for functions it will now use the
  existing ObjectWithArgs node type. Some of the more obscure object
  types still use somewhat arbitrary nested lists.
  Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
* Add operator_with_argtypes grammar rule (Peter Eisentraut, 2017-03-06)
  This makes the handling of operators similar to that of functions and
  aggregates.
  Rename node FuncWithArgs to ObjectWithArgs, to reflect the expanded
  use.
  Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
* Use class_args field in opclass_drop (Peter Eisentraut, 2017-03-06)
  This makes it consistent with the usage in opclass_item.
  Reviewed-by: Jim Nasby <Jim.Nasby@BlueTreble.com>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
* Fix incorrect comments. (Robert Haas, 2017-03-06)
  Commit 19dc233c32f2900e57b8da4f41c0f662ab42e080 introduced these
  comments. Michael Paquier noticed that one of them had a typo, but a
  bigger problem is that they were not an accurate description of what
  the code was doing.
  Patch by me.
* Mark pg_start_backup and pg_stop_backup as parallel-restricted. (Robert Haas, 2017-03-06)
  They depend on backend-private state that will not be synchronized by
  the parallel machinery, so they should not be marked parallel-safe.
  This issue also exists in 9.6, but we obviously can't do anything about
  9.6 clusters that already exist. Possibly this could be back-patched so
  that future 9.6 clusters would come out OK, or possibly we should
  back-patch some other fix, but that would need more discussion.
  David Steele, reviewed by Michael Paquier
  Discussion: http://postgr.es/m/CA+TgmoYCWfO2UM-t=HUMFJyxJywLDiLL0nAJpx88LKtvBvNECw@mail.gmail.com
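  On an existing 9.6 cluster, an administrator could apply the same
  marking by hand; a sketch, assuming the standard 9.6 signatures of
  these functions:

      ALTER FUNCTION pg_start_backup(text, boolean, boolean)
          PARALLEL RESTRICTED;
      ALTER FUNCTION pg_stop_backup() PARALLEL RESTRICTED;
      ALTER FUNCTION pg_stop_backup(boolean) PARALLEL RESTRICTED;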
* Fix use-after-free bug. (Robert Haas, 2017-03-06)
  Introduced by commit aea5d298362e881b13d95a48c5ae116879237389.
  Patch from Amit Kapila. Issue discovered independently by Amit Kapila
  and Ashutosh Sharma.
* Reorder the asynchronous libpq calls for replication connection (Peter Eisentraut, 2017-03-06)
  Per libpq documentation, the initial state must be
  PGRES_POLLING_WRITING. Failing to do that appears to cause some issues
  on some Windows systems.
  From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
* Reduce lock levels for table storage params related to planning (Simon Riggs, 2017-03-06)
  The following parameters are now updateable with
  ShareUpdateExclusiveLock:
    effective_io_concurrency
    parallel_workers
    seq_page_cost
    random_page_cost
    n_distinct
    n_distinct_inherited
  Simon Riggs and Fabrízio Mello
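  In practice that means commands like these no longer take
  AccessExclusiveLock, so they don't block concurrent reads and writes
  (table and column names illustrative):

      ALTER TABLE orders SET (parallel_workers = 4);
      ALTER TABLE orders ALTER COLUMN customer_id SET (n_distinct = 5000);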
* Allow partitioned tables to be dropped without CASCADE (Simon Riggs, 2017-03-06)
  Record partitioned table dependencies as DEPENDENCY_AUTO rather than
  DEPENDENCY_NORMAL, so that DROP TABLE just works.
  Remove all the tests for partitioned tables where earlier work had
  deliberately avoided using CASCADE.
  Amit Langote, reviewed by Ashutosh Bapat and myself
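  A sketch of the now-working pattern (names illustrative):

      CREATE TABLE measurement (logdate date NOT NULL)
          PARTITION BY RANGE (logdate);
      CREATE TABLE measurement_y2017 PARTITION OF measurement
          FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
      -- the partition's dependency is now AUTO, so no CASCADE is needed:
      DROP TABLE measurement;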
* In rebuild_relation(), don't access an already-closed relcache entry. (Tom Lane, 2017-03-04)
  This reliably fails with -DRELCACHE_FORCE_RELEASE, as reported by
  Andrew Dunstan, and could sometimes fail in normal operation, resulting
  in a wrong persistence value being used for the transient table. It's
  not immediately clear to me what effects that might have beyond the
  risk of a crash while accessing OldHeap->rd_rel->relpersistence, but
  it's probably not good.
  Bug introduced by commit f41872d0c, and made substantially worse by
  commit 85b506bbf, which added a second such access significantly later
  than the heap_close. I doubt the first reference could fail in a
  production scenario, but the second one definitely could.
  Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
* pg_dump: Fix ordering (Peter Eisentraut, 2017-03-04)
  Materialized view refreshes should come last.
  From: Jim Nasby <Jim.Nasby@BlueTreble.com>
* Disallow CREATE/DROP SUBSCRIPTION in transaction block (Peter Eisentraut, 2017-03-03)
  Disallow CREATE SUBSCRIPTION and DROP SUBSCRIPTION in a transaction
  block when the replication slot is to be created or dropped, since that
  cannot be rolled back.
  Based on a patch by Masahiko Sawada <sawada.mshk@gmail.com>
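  Illustration of the new restriction (connection string and names are
  placeholders):

      BEGIN;
      CREATE SUBSCRIPTION mysub
          CONNECTION 'host=publisher dbname=app'
          PUBLICATION mypub;
      -- fails: the replication slot created on the remote server could
      -- not be rolled back together with the local transaction
      ROLLBACK;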
* Fix parsing of DROP SUBSCRIPTION ... DROP SLOT (Peter Eisentraut, 2017-03-03)
  It didn't actually parse before.
  Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>
* Fix two recently introduced grammar errors in mmgr/README. (Andres Freund, 2017-03-03)
  These were introduced by me in f4e2d50c.
  Reported-By: Tomas Vondra
  Discussion: https://postgr.es/m/11adca69-be28-44bc-a801-64e6d53851e3@2ndquadrant.com
* Fix typo (Peter Eisentraut, 2017-03-03)
* psql: Add tab completion for logical replication (Peter Eisentraut, 2017-03-03)
  Add tab completion for publications and subscriptions. Also, to be able
  to get a list of subscriptions, make pg_subscription world-readable but
  revoke access to subconninfo using column privileges.
  From: Michael Paquier <michael.paquier@gmail.com>
* Add RENAME support for PUBLICATIONs and SUBSCRIPTIONs (Peter Eisentraut, 2017-03-03)
  From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
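  The new forms, with illustrative names:

      ALTER PUBLICATION mypub RENAME TO sales_pub;
      ALTER SUBSCRIPTION mysub RENAME TO sales_sub;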
* Fix after trigger execution in logical replication (Peter Eisentraut, 2017-03-03)
  From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
  Tested-by: Thom Brown <thom@linux.com>
* Use asynchronous connect API in libpqwalreceiver (Peter Eisentraut, 2017-03-03)
  This makes the connection attempt from CREATE SUBSCRIPTION and from
  WalReceiver interruptible by the user in case the libpq connection is
  hanging. The previous coding required immediate shutdown (SIGQUIT) of
  PostgreSQL in that situation.
  From: Petr Jelinek <petr.jelinek@2ndquadrant.com>
  Tested-by: Thom Brown <thom@linux.com>
* Allow vacuums to report oldestxmin (Simon Riggs, 2017-03-03)
  Allow VACUUM and Autovacuum to report the oldestxmin value they used
  while cleaning tables, helping to make better sense out of the other
  statistics we report in various cases.
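  A sketch of where the new figure appears (message wording abbreviated,
  table name illustrative):

      VACUUM VERBOSE orders;
      -- the INFO detail for each table now also reports the oldest xmin
      -- the vacuum used when deciding which dead tuples were removable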