path: root/src
* Reestablish alignment of pg_controldata output. (Joe Conway, 2015-08-25)
  Until 9.4, pg_controldata output was all aligned. At some point during 9.5 development, a new item was added, namely "Current track_commit_timestamp setting:", which is two characters too long to be aligned with the rest of the output. Fix this by removing the noise word "Current" and adding the requisite number of padding spaces. Since the six preceding items are also similar in nature, remove "Current" and pad those as well in order to maintain overall consistency.
  Backpatch to 9.5, where the new offending item was added.
* Further tweak wording of error messages about bad CONTINUE/EXIT statements. (Tom Lane, 2015-08-25)
  Per discussion, a little more verbosity seems called for.
* Limit the verbosity of memory context statistics dumps. (Tom Lane, 2015-08-25)
  We had a report from Stefan Kaltenbrunner of a case in which postmaster log files overran available disk space because multiple backends spewed enormous context stats dumps upon hitting an out-of-memory condition. Given the lack of similar reports, this isn't a common problem, but it still seems worth doing something about.
  However, we don't want to just blindly truncate the output, because that might prevent diagnosis of OOM problems. What seems like a workable compromise is to limit the dump to 100 child contexts per parent, and summarize the space used within any additional child contexts. That should help because practical cases where the dump gets long will typically be huge numbers of siblings under the same parent context; while the additional debugging value from seeing details about individual siblings beyond 100 will not be large, we hope. Anyway it doesn't take much code or memory space to do this, so let's try it like this and see how things go.
  Since the summarization mechanism requires passing totals back up anyway, I took the opportunity to add a "grand total" line to the end of the printout.
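  A minimal standalone sketch of that limit-and-summarize idea; the names, struct layout, and output format here are illustrative only, not the actual MemoryContextStats code, and descendants of skipped children are ignored for brevity:

      #include <stdio.h>

      #define MAX_CHILDREN_PER_PARENT 100

      typedef struct Ctx
      {
          const char *name;
          unsigned long self_bytes;
          struct Ctx *first_child;
          struct Ctx *next_sibling;
      } Ctx;

      /* Print a context tree, folding children beyond the limit into one
       * summary line; return the subtree total so the caller can print a
       * final "grand total" line. */
      static unsigned long
      print_stats(const Ctx *ctx, int depth)
      {
          unsigned long total = ctx->self_bytes;
          unsigned long skipped_bytes = 0;
          int         shown = 0;
          int         skipped = 0;
          const Ctx  *child;

          printf("%*s%s: %lu bytes\n", depth * 2, "", ctx->name, ctx->self_bytes);
          for (child = ctx->first_child; child != NULL; child = child->next_sibling)
          {
              if (shown < MAX_CHILDREN_PER_PARENT)
              {
                  total += print_stats(child, depth + 1);
                  shown++;
              }
              else
              {
                  skipped++;
                  skipped_bytes += child->self_bytes;   /* summarize instead of printing */
                  total += child->self_bytes;
              }
          }
          if (skipped > 0)
              printf("%*s%d more child contexts containing %lu bytes\n",
                     (depth + 1) * 2, "", skipped, skipped_bytes);
          return total;
      }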
* Fix potential platform dependence in gist regression test. (Tom Lane, 2015-08-25)
  The results of the KNN-search test cases were indeterminate, as they asked the system to sort pairs of points that are exactly equidistant from the query reference point. It's a bit surprising that we've seen no platform-specific failures from this in the buildfarm. Perhaps IEEE-float math is well enough standardized that no such failures will ever occur on supported platforms ... but since this entire regression test has yet to be shipped in any non-alpha release, that seems like an unduly optimistic assumption. Tweak the queries so that the correct output is uniquely defined.
  (The other queries in this test are also underdetermined; but it looks like they are regurgitating index rows in insertion order, so for the moment assume that that behavior is stable enough.)
  Per Greg Stark's experiments with VAX. Back-patch to 9.5 where this test script was introduced.
* Tweak wording of syntax error messages about bad CONTINUE/EXIT statements. (Tom Lane, 2015-08-23)
  Try to avoid any possible confusion about what these messages mean.
* Reduce number of bytes examined by convert_one_string_to_scalar(). (Tom Lane, 2015-08-23)
  Previously, convert_one_string_to_scalar() would examine up to 20 bytes of the input string, producing a scalar conversion with theoretical precision far greater than is of any possible use considering the other limitations on the accuracy of the resulting selectivity estimate. (I think this choice might pre-date the caller-level logic that strips any common prefix of the strings; before that, there could have been value in scanning the strings far enough to use all the precision available in a double.)
  Aside from wasting cycles to little purpose, this choice meant that the "denom" variable could grow to as much as 256^21 = 3.74e50, which could overflow in some non-IEEE float arithmetics. While we don't really support any machines with non-IEEE arithmetic anymore, this still seems like quite an unnecessary platform dependency. Limit the scan to 12 bytes instead, thus limiting "denom" to 256^13 = 2.03e31, a value more likely to be computable everywhere.
  Per testing by Greg Stark, which showed overflow failures in our standard regression tests on VAX.
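  An illustrative sketch of the base-256 conversion the arithmetic above refers to (not the actual convert_one_string_to_scalar() code, which works relative to the observed character range rather than a hard-coded base of 256): after scanning N bytes, denom reaches 256^(N+1), hence 256^21 for a 20-byte scan and 256^13 for a 12-byte scan.

      static double
      string_to_scalar(const unsigned char *str, int len, int max_bytes)
      {
          double      num = 0.0;
          double      denom = 256.0;
          int         i;

          if (len > max_bytes)
              len = max_bytes;            /* the commit lowers this cap from 20 to 12 */
          for (i = 0; i < len; i++)
          {
              num += (double) str[i] / denom;
              denom *= 256.0;             /* after N bytes, denom = 256^(N+1) */
          }
          return num;                     /* a fraction in [0, 1) */
      }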
* Avoid use of float arithmetic in bipartite_match.c. (Tom Lane, 2015-08-23)
  Since the distances used in this algorithm are small integers (not more than the size of the U set, in fact), there is no good reason to use float arithmetic for them. Use short ints instead: they're smaller, faster, and require no special portability assumptions.
  Per testing by Greg Stark, which disclosed that the code got into an infinite loop on VAX for lack of IEEE-style float infinities. We don't really care all that much whether Postgres can run on a VAX anymore, but there seems sufficient reason to change this code anyway.
  In passing, make a few other small adjustments to make the code match usual Postgres coding style a bit better.
* Fix typo in C comment. (Kevin Grittner, 2015-08-23)
  Merlin Moncure
  Backpatch to 9.5, where the misspelling was introduced.
* Improve whitespace (Peter Eisentraut, 2015-08-22)
* Add hint to run "pgbench -i", if test tables don't exist. (Heikki Linnakangas, 2015-08-22)
  Fabien Coelho, reviewed by Julien Rouhaud
* Avoid O(N^2) behavior when enlarging SPI tuple table in spi_printtup(). (Tom Lane, 2015-08-21)
  For no obvious reason, spi_printtup() was coded to enlarge the tuple pointer table by just 256 slots at a time, rather than doubling the size at each reallocation, as is our usual habit. For very large SPI results, this makes for O(N^2) time spent in repalloc(), which of course soon comes to dominate the runtime. Use the standard doubling approach instead.
  This is a longstanding performance bug, so back-patch to all active branches.
  Neil Conway
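  A generic sketch of the two growth policies, assuming plain realloc in place of repalloc and made-up names rather than the SPI code itself: a fixed 256-slot increment copies each element O(N) times over the life of the table, while doubling copies each element O(1) times on average.

      #include <stdlib.h>

      typedef struct
      {
          void      **vals;
          size_t      alloced;
          size_t      used;
      } TupTable;

      static int
      tuptable_add(TupTable *t, void *tuple)
      {
          if (t->used >= t->alloced)
          {
              /* double instead of "t->alloced += 256" */
              size_t      newsize = t->alloced ? t->alloced * 2 : 128;
              void      **newvals = realloc(t->vals, newsize * sizeof(void *));

              if (newvals == NULL)
                  return -1;              /* the real code reports an error instead */
              t->vals = newvals;
              t->alloced = newsize;
          }
          t->vals[t->used++] = tuple;
          return 0;
      }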
* Detect mismatched CONTINUE and EXIT statements at plpgsql compile time. (Tom Lane, 2015-08-21)
  With a bit of tweaking of the compile namestack data structure, we can verify at compile time whether a CONTINUE or EXIT is legal. This is surely better than leaving it to runtime, both because earlier is better and because we can issue a proper error pointer. Also, we can get rid of the ad-hoc old way of detecting the problem, which only took care of CONTINUE not EXIT.
  Jim Nasby, adjusted a bit by me
* Clean up roles from roleattributes test (Stephen Frost, 2015-08-21)
  Having the roles remain after the test ends up causing repeated 'make installcheck' runs to fail and may be risky from a security perspective also, so remove them at the end of the test.
* Do not allow *timestamp to be passed as NULL (Alvaro Herrera, 2015-08-21)
  The code had bugs that would cause crashes if NULL was passed as that argument (originally intended to mean not to bother returning its value), and after inspection it turns out that nothing seems interested in the case that *ts is NULL anyway. Therefore, remove the partial checks intended to support that case.
  Author: Michael Paquier, though I didn't include a proposed Assert.
  Backpatch to 9.5.
* Remove ExecGetScanType function (Alvaro Herrera, 2015-08-21)
  This became unused in a191a169d6d0b9558da4519e66510c4540204a51.
* Fix plpython crash when returning string representation of a RECORD result. (Tom Lane, 2015-08-21)
  PLyString_ToComposite() blithely overwrote proc->result.out.d, even though for a composite result type the other union variant proc->result.out.r is the one that should be valid. This could result in a crash if out.r had in fact been filled in (proc->result.is_rowtype == 1) and then somebody later attempted to use that data; as per bug #13579 from Paweł Michalak. Just to add insult to injury, it didn't work for RECORD results anyway, because record_in() would refuse the case.
  Fix by doing the I/O function lookup in a local PLyTypeInfo variable, as we were doing already in PLyObject_ToComposite(). This is not a great technique because any fn_extra data allocated by the input function will be leaked permanently (thanks to using TopMemoryContext as fn_mcxt). But that's a pre-existing issue that is much less serious than a crash, so leave it to be fixed separately.
  This bug would be a potential security issue, except that plpython is only available to superusers and the crash requires coding the function in a way that didn't work before today's patches.
  Add regression test cases covering all the supported methods of converting composite results.
  Back-patch to 9.1 where the faulty coding was introduced.
* Allow record_in() and record_recv() to work for transient record types. (Tom Lane, 2015-08-21)
  If we have the typmod that identifies a registered record type, there's no reason that record_in() should refuse to perform input conversion for it. Now, in direct SQL usage, record_in() will always be passed typmod = -1 with type OID RECORDOID, because no typmodin exists for type RECORD, so the case can't arise. However, some InputFunctionCall users such as PLs may be able to supply the right typmod, so we should allow this to support them.
  Note: the previous coding and comment here predate commit 59c016aa9f490b53. There has been no case since 8.1 in which the passed type OID wouldn't be valid; and if it weren't, this error message wouldn't be apropos anyway. Better to let lookup_rowtype_tupdesc complain about it.
  Back-patch to 9.1, as this is necessary for my upcoming plpython fix. I'm committing it separately just to make it a bit more visible in the commit history.
* Rename 'cmd' to 'cmd_name' in CreatePolicyStmt (Stephen Frost, 2015-08-21)
  To avoid confusion, rename CreatePolicyStmt's 'cmd' to 'cmd_name', parse_policy_command's 'cmd' to 'polcmd', and AlterPolicy's 'cmd_datum' to 'polcmd_datum', per discussion with Noah and as a follow-up to his correction of copynodes/equalnodes handling of the CreatePolicyStmt 'cmd' field.
  Back-patch to 9.5 where the CreatePolicyStmt was introduced, as we are still only in alpha.
* In AlterRole, make bypassrls an int (Stephen Frost, 2015-08-21)
  When reworking bypassrls in AlterRole to operate the same way the other attribute handling is done, I missed that the variable was incorrectly a bool rather than an int. This meant that on platforms with an unsigned char, we could end up with incorrect behavior during ALTER ROLE.
  Pointed out by Andres thanks to tests he did changing our bool to be the one from stdbool.h which showed this and a number of other issues.
  Add regression tests to test CREATE/ALTER role for the various role attributes. Arrange to leave roles behind for testing pg_dumpall, but none which have the LOGIN attribute.
  Back-patch to 9.5 where the AlterRole bug exists.
* Fix bug in calculations of hash join buckets. (Kevin Grittner, 2015-08-19)
  Commit 8cce08f168481c5fc5be4e7e29b968e314f1b41e used a left-shift on a literal of 1 that could (in large allocations) be shifted by 31 or more bits. This was assigned to a local variable that was already declared to be a long to protect against overruns of int, but the literal in this shift needs to be declared long to allow it to work correctly in some compilers.
  Backpatch to 9.5, where the bug was introduced.
  Report and patch by KaiGai Kohei, slightly modified based on discussion.
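  The pitfall is easy to reproduce outside the hash-join code; a hypothetical stand-alone illustration (the variable names and the shift amount are made up, not the committed code): the shift on a plain int literal is evaluated at int width before the widening assignment, so the 1 must itself be a long.

      #include <stdio.h>

      int
      main(void)
      {
          int     log2_nbuckets = 31;             /* large allocations can reach this */
          long    nbuckets;

          /* nbuckets = 1 << log2_nbuckets;          int-width shift: overflows */
          nbuckets = 1L << log2_nbuckets;         /* long-width shift: correct where long is 64 bits */
          printf("%ld\n", nbuckets);
          return 0;
      }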
* Fix a few bogus statement type names in plpgsql error messages. (Tom Lane, 2015-08-18)
  plpgsql's error location context messages ("PL/pgSQL function fn-name line line-no at stmt-type") would misreport a CONTINUE statement as being an EXIT, and misreport a MOVE statement as being a FETCH. These are clear bugs that have been there a long time, so back-patch to all supported branches.
  In addition, in 9.5 and HEAD, change the description of EXECUTE from "EXECUTE statement" to just plain EXECUTE; there seems no good reason why this statement type should be described differently from others that have a well-defined head keyword. And distinguish GET STACKED DIAGNOSTICS from plain GET DIAGNOSTICS. These are a bit more of a judgment call, and also affect existing regression-test outputs, so I did not back-patch into stable branches.
  Pavel Stehule and Tom Lane
* psql: Make EXECUTE PROCEDURE tab completion a bit narrower. (Robert Haas, 2015-08-18)
  If the user has typed GRANT EXECUTE, the correct completion is "ON", not "PROCEDURE".
  Daniel Verite
* Fix performance bug from conflict between two previous improvements. (Tom Lane, 2015-08-17)
  My expanded-objects patch (commit 1dc5ebc9077ab742) included code to make plpgsql pass expanded-object variables as R/W pointers to certain functions that are trusted for modifying such variables in-place. However, that optimization got broken by commit 6c82d8d1fdb1f126, which arranged to share a single ParamListInfo across most expressions evaluated by a plpgsql function. We don't want a R/W pointer to be passed to other functions just because we decided one function was safe!
  Fortunately, the breakage was in the other direction, of never passing a R/W pointer at all, because we'd always have pre-initialized the shared array slot with a R/O pointer. So it was still functionally correct, but we were back to O(N^2) performance for repeated use of "a := a || x". To fix, force an unshared param array to be used when the R/W param optimization is active.
  Commit 6c82d8d1fdb1f126 is in HEAD only, so no need for a back-patch.
* Fix reporting of skipped transactions in pgbench. (Heikki Linnakangas, 2015-08-17)
  Broken by commit 1bc90f7a.
  Fabien Coelho.
* Repair unsafe use of shared typecast-lookup table in plpgsql DO blocks. (Tom Lane, 2015-08-15)
  DO blocks use private simple_eval_estates to avoid intra-transaction memory leakage, cf commit c7b849a89. I had forgotten about that while writing commit 0fc94a5ba, but it means that expression execution trees created within a DO block disappear immediately on exiting the DO block, and hence can't safely be linked into plpgsql's session-wide cast hash table. To fix, give a DO block a private cast hash table to go with its private simple_eval_estate. This is less efficient than one could wish, since DO blocks can no longer share any cast lookup work with other plpgsql execution, but it shouldn't be too bad; in any case it's no worse than what happened in DO blocks before commit 0fc94a5ba.
  Per bug #13571 from Feike Steenbergen. Preliminary analysis by Oleksandr Shulgin.
* Don't use function definitions looking like old-style ones. (Andres Freund, 2015-08-15)
  This fixes a bunch of somewhat pedantic warnings with new compilers. Since by far the majority of other function definitions use the (void) style, it just seems consistent to do so as well in the remaining few places.
* Correct type of waitMode variable in ExecInsertIndexTuples(). (Andres Freund, 2015-08-15)
  It was a bool, even though it should be CEOUC_WAIT_MODE. That's unlikely to have a negative effect with the current definition of bool (char), but it's definitely wrong.
  Discussion: 20150812084351.GD8470@awork2.anarazel.de
  Backpatch: 9.5, where ON CONFLICT was merged
* vacuumdb: Don't assign negative values to a boolean. (Andres Freund, 2015-08-15)
  Since a17923204736 (vacuumdb: enable parallel mode) -1 has been assigned to a boolean. That can, justifiedly, trigger compiler warnings. There's also no need for ternary logic, result was only ever set to 0 or -1. So don't.
  Discussion: 20150812084351.GD8470@awork2.anarazel.de
  Backpatch: 9.5
* Don't use 'bool' as a struct member name in help_config.c. (Andres Freund, 2015-08-15)
  Doing so doesn't work if bool is a macro rather than a typedef. Although c.h spends some effort to support configurations where bool is a preexisting macro, help_config.c has existed this way since 2003 (b700a6), and there have not been any reports of problems. Backpatch anyway since this is as riskless as it gets.
  Discussion: 20150812084351.GD8470@awork2.anarazel.de
  Backpatch: 9.0-master
* Use the correct type for TableInfo->relreplident. (Andres Freund, 2015-08-15)
  Mistakenly relreplident was stored as a bool. That works today as c.h typedefs bool to a char, but isn't very future proof.
  Discussion: 20150812084351.GD8470@awork2.anarazel.de
  Backpatch: 9.4 where replica identity was introduced.
* Remove unused expected-output file. (Robert Haas, 2015-08-14)
* Reject isolation test specifications with duplicate step names. (Robert Haas, 2015-08-14)
  alter-table-1.spec has such a case, so change one instance of step rx1 to rx3 instead.
* Encoding PG_UHC is code page 949. (Noah Misch, 2015-08-14)
  This fixes presentation of non-ASCII messages to the Windows event log and console in rare cases involving Korean locale. Processes like the postmaster and checkpointer, but not processes attached to databases, were affected. Back-patch to 9.4, where MessageEncoding was introduced. The problem exists in all supported versions, but this change has no effect in the absence of the code recognizing PG_UHC MessageEncoding.
  Noticed while investigating bug #13427 from Dmitri Bourlatchkov.
* Restore old pgwin32_message_to_UTF16() behavior outside transactions. (Noah Misch, 2015-08-14)
  Commit 49c817eab78c6f0ce8c3bf46766b73d6cf3190b7 replaced with a hard error the dubious pg_do_encoding_conversion() behavior when outside a transaction. Reintroduce the historic soft failure locally within pgwin32_message_to_UTF16(). This fixes errors when writing messages in less-common encodings to the Windows event log or console. Back-patch to 9.4, where the aforementioned commit first appeared.
  Per bug #13427 from Dmitri Bourlatchkov.
* Reduce lock levels for ALTER TABLE SET autovacuum storage options (Simon Riggs, 2015-08-14)
  Reduce lock levels down to ShareUpdateExclusiveLock for all autovacuum-related relation options when setting them using ALTER TABLE.
  Add infrastructure to allow varying lock levels for relation options in later patches. Setting multiple options together uses the highest lock level required for any option. Works for both main and toast tables.
  Fabrízio Mello, reviewed by Michael Paquier, mild edit and additional regression tests from myself
* PL/Python: Make tests pass with Python 3.5 (Peter Eisentraut, 2015-08-13)
  The error message wording for AttributeError has changed in Python 3.5. For the plpython_error test, add a new expected file. In the plpython_subtransaction test, we didn't really care what the exception is, only that it is something coming from Python. So use a generic exception instead, which has a message that doesn't vary across versions.
* MSVC: Exclude 'brin' contrib module (Alvaro Herrera, 2015-08-13)
  The build script is not able to parse the Makefile, so remove it.
* Re-add BRIN isolation test (Alvaro Herrera, 2015-08-13)
  This time, instead of using a core isolation test, put it on its own test module; this way it can require the pageinspect module to be present before running.
  The module's Makefile is loosely modeled after test_decoding's, so that it's easy to add further tests for either pg_regress or isolationtester later.
  Backpatch to 9.5.
* Improve regression test case to avoid depending on system catalog stats. (Tom Lane, 2015-08-13)
  In commit 95f4e59c32866716 I added a regression test case that examined the plan of a query on system catalogs. That isn't a terribly great idea because the catalogs tend to change from version to version, or even within a version if someone makes an unrelated regression-test change that populates the catalogs a bit differently. Usually I try to make planner test cases rely on test tables that have not changed since Berkeley days, but I got sloppy in this case because the submitted crasher example queried the catalogs and I didn't spend enough time on rewriting it. But it was a problem waiting to happen, as I was rudely reminded when I tried to port that patch into Salesforce's Postgres variant :-(.
  So spend a little more effort and rewrite the query to not use any system catalogs. I verified that this version still provokes the Assert if 95f4e59c32866716's code fix is reverted.
  I also removed the EXPLAIN output from the test, as it turns out that the assertion occurs while considering a plan that isn't the one ultimately selected anyway; so there's no value in risking any cross-platform variation in that printout.
  Back-patch to 9.2, like the previous patch.
* Run autoheader to add a few missing #defines to pg_config.h.in. (Heikki Linnakangas, 2015-08-13)
  These are emitted by the new ax_pthread.m4 script version. They are not used for anything in PostgreSQL, but let's keep the generated header file up-to-date.
  Andres Freund
* Fix declaration of isarray variable. (Michael Meskes, 2015-08-13)
  Found and fixed by Andres Freund.
* Fix uninitialized variables (Alvaro Herrera, 2015-08-13)
  As complained by clang, reported by Andres Freund. Brown paper bag bug in ccc4c074994d.
  Add some comments, too.
  Backpatch to 9.5, like that one.
* Undo mistaken tightening in join_is_legal(). (Tom Lane, 2015-08-12)
  One of the changes I made in commit 8703059c6b55c427 turns out not to have been such a good idea: we still need the exception in join_is_legal() that allows a join if both inputs already overlap the RHS of the special join we're checking. Otherwise we can miss valid plans, and might indeed fail to find a plan at all, as in recent report from Andreas Seltenreich.
  That code was added way back in commit c17117649b9ae23d, but I failed to include a regression test case then; my bad. Put it back with a better explanation, and a test this time. The logic does end up a bit different than before though: I now believe it's appropriate to make this check first, thereby allowing such a case whether or not we'd consider the previous SJ(s) to commute with this one. (Presumably, we already decided they did; but it was confusing to have this consideration in the middle of the code that was handling the other case.)
  Back-patch to all active branches, like the previous patch.
* Close some holes in BRIN page assignment (Alvaro Herrera, 2015-08-12)
  In some corner cases, it is possible for the BRIN index relation to be extended by brin_getinsertbuffer but the new page not be used immediately for anything by its callers; when this happens, the page is initialized and the FSM is updated (by brin_getinsertbuffer) with the info about that page, but these actions are not WAL-logged. A later index insert/update can use the page, but since the page is already initialized, the initialization itself is not WAL-logged then either. Replay of this sequence of events causes recovery to fail altogether.
  There is a related corner case within brin_getinsertbuffer itself, in which we extend the relation to put a new index tuple there, but later find out that we cannot do so, and do not return the buffer; the page obtained from extension is not even initialized. The resulting page is lost forever.
  To fix, shuffle the code so that initialization is not the responsibility of brin_getinsertbuffer anymore, in normal cases; instead, the initialization is done by its callers (brin_doinsert and brin_doupdate) once they're certain that the page is going to be used. When either of those functions determines that the new page cannot be used, before bailing out they initialize the page as an empty regular page, enter it in the FSM and WAL-log all this. This way, the page is usable for future index insertions, and WAL replay doesn't end up trying to insert tuples into pages whose initialization didn't make it to the WAL. The same strategy is used in brin_getinsertbuffer when it cannot return the new page.
  Additionally, add a new step to vacuuming so that all pages of the index are scanned; whenever an uninitialized page is found, it is initialized as empty and WAL-logged. This closes the hole that the relation is extended but the system crashes before anything is WAL-logged about it. We also take this opportunity to update the FSM, in case it has gotten out of date.
  Thanks to Heikki Linnakangas for finding the problem that kicked some additional analysis of BRIN page assignment code.
  Backpatch to 9.5, where BRIN was introduced.
  Discussion: https://www.postgresql.org/message-id/20150723204810.GY5596@postgresql.org
* Remove duplicated assignment in pg_create_physical_replication_slot. (Andres Freund, 2015-08-12)
  Reported-By: Gurjeet Singh
* Handle PQresultErrorField(PG_DIAG_SQLSTATE) returning NULL in streamutil.c. (Andres Freund, 2015-08-12)
  In ff27db5d I missed that PQresultErrorField() may return NULL if there's no sqlstate associated with an error.
  Spotted-By: Coverity
  Reported-By: Michael Paquier
  Discussion: CAB7nPqQ3o10SY6NVdU4pjq85GQTN5tbbkq2gnNUh2fBNU3rKyQ@mail.gmail.com
  Backpatch: 9.5, like ff27db5d
* Fix two off-by-one errors in bufmgr.c. (Andres Freund, 2015-08-12)
  In 4b4b680c I passed a buffer index number (starting from 0) instead of a proper Buffer id (which starts from 1 for shared buffers) in two places. This wasn't noticed so far as one of those locations isn't compiled at all (PrintPinnedBufs) and the other one (InvalidBuffer) requires an unlikely, but possible, set of circumstances to trigger a symptom.
  To reduce the likelihood of such incidents a bit, also convert existing open-coded mappings from buffer descriptors to buffer ids with BufferDescriptorGetBuffer().
  Author: Qingqing Zhou
  Reported-By: Qingqing Zhou
  Discussion: CAJjS0u2ai9ooUisKtkV8cuVUtEkMTsbK8c7juNAjv8K11zeCQg@mail.gmail.com
  Backpatch: 9.5 where the private ref count infrastructure was introduced
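  A simplified sketch of the convention involved, assuming made-up names (BufferDescSketch, buffer_from_descriptor) rather than the real bufmgr declarations: descriptor-array indexes are 0-based, Buffer ids for shared buffers are 1-based, and 0 is reserved for InvalidBuffer, so confusing the two is exactly an off-by-one.

      typedef int Buffer;

      #define InvalidBuffer 0

      typedef struct BufferDescSketch
      {
          int         buf_id;             /* 0-based index into the descriptor array */
      } BufferDescSketch;

      static inline Buffer
      buffer_from_descriptor(const BufferDescSketch *bdesc)
      {
          return bdesc->buf_id + 1;       /* the +1 the commit centralizes via a helper */
      }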
* Fix some possible low-memory failures in regexp compilation. (Tom Lane, 2015-08-12)
  newnfa() failed to set the regex error state when malloc() fails. Several places in regcomp.c failed to check for an error after calling subre(). Each of these mistakes could lead to null-pointer-dereference crashes in memory-starved backends.
  Report and patch by Andreas Seltenreich. Back-patch to all branches.
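  A generic sketch of the defensive pattern being enforced here, in standalone C with invented names and error codes rather than the regex library's real structures: record an error code when an allocation fails, and have callers check it before using the result.

      #include <stdlib.h>

      struct compile_state
      {
          int         err;                /* 0 = OK, nonzero = error code */
      };

      static void *
      alloc_node(struct compile_state *cs, size_t size)
      {
          void       *p = malloc(size);

          if (p == NULL)
              cs->err = 1;                /* record "out of memory" instead of staying silent */
          return p;
      }

      static int
      build_tree(struct compile_state *cs)
      {
          void       *node = alloc_node(cs, 64);

          if (cs->err != 0 || node == NULL)
              return cs->err;             /* check before dereferencing, as the fix does */
          /* ... build on node ... */
          free(node);
          return 0;
      }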
* Postpone extParam/allParam calculations until the very end of planning. (Tom Lane, 2015-08-11)
  Until now we computed these Param ID sets at the end of subquery_planner, but that approach depends on subquery_planner returning a concrete Plan tree. We would like to switch over to returning one or more Paths for a subquery, and in that representation the necessary details aren't fully fleshed out (not to mention that we don't really want to do this work for Paths that end up getting discarded). Hence, refactor so that we can compute the param ID sets at the end of planning, just before set_plan_references is run.
  The main change necessary to make this work is that we need to capture the set of outer-level Param IDs available to the current query level before exiting subquery_planner, since the outer levels' plan_params lists are transient. (That's not going to pose a problem for returning Paths, since all the work involved in producing that data is part of expression preprocessing, which will continue to happen before Paths are produced.)
  On the plus side, this change gets rid of several existing kluges. Eventually I'd like to get rid of SS_finalize_plan altogether in favor of doing this work during set_plan_references, but that will require some complex rejiggering because SS_finalize_plan needs to visit subplans and initplans before the main plan. So leave that idea for another day.
* Don't include rel.h when relcache.h is sufficient (Alvaro Herrera, 2015-08-11)
  Trivial change to reduce exposure of rel.h.