path: root/src
Commit message | Author | Date
* Stamp 9.1.21. (tag: REL9_1_21) | Tom Lane | 2016-03-28
|
* Translation updates | Peter Eisentraut | 2016-03-28
| | | | | Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: dbf5cd90475f35d96a3df107a00e7fea082c6b89
* Change various Gin*Is* macros to return 0/1. | Andres Freund | 2016-03-27
| | | | | | | | | | | | | | | | | | | | | | Returning the direct result of bit arithmetic, in a macro intended to be used in a boolean manner, can be problematic if the return value is stored in a variable of type 'bool'. If bool is implemented using C99's _Bool, that can lead to comparison failures if the variable is then compared again with the expression (see ginStepRight() for an example that fails), as _Bool forces the result to be 0/1. That happens in some configurations of newer MSVC compilers. It's also problematic when storing the result of such an expression in a narrower type. Several gin macros have been declared in that style since gin's initial commit in 8a3631f8d86. There are a lot more macros like this, but this is the only one causing regression test failures; and I don't want to commit and backpatch a larger patch with lots of conflicts just before the next set of minor releases. Discussion: 20150811154237.GD17575@awork2.anarazel.de Backpatch: All supported branches
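
A minimal, compilable C sketch of the failure mode this commit describes; the macro names here are hypothetical, not the actual Gin macros:

    #include <stdbool.h>
    #include <stdio.h>

    #define ITEM_FLAG             0x04
    /* problematic style: evaluates to 0 or 0x04, not 0 or 1 */
    #define ITEM_HAS_FLAG_RAW(f)  ((f) & ITEM_FLAG)
    /* style the commit switches to: force the result to 0/1 */
    #define ITEM_HAS_FLAG(f)      (((f) & ITEM_FLAG) ? 1 : 0)

    int
    main(void)
    {
        unsigned int flags = ITEM_FLAG;
        bool         saved = ITEM_HAS_FLAG_RAW(flags);  /* _Bool coerces 0x04 to 1 */

        if (saved != ITEM_HAS_FLAG_RAW(flags))
            printf("mismatch: stored bool is 1, raw macro yields 0x04\n");
        if (saved == ITEM_HAS_FLAG(flags))
            printf("the 0/1 macro stays consistent\n");
        return 0;
    }
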
* Modernize zic's test for valid timezone abbreviations. | Tom Lane | 2016-03-26
| | | | | | | | | We really need to sync all of our IANA-derived timezone code with upstream, but that's going to be a large patch and I certainly don't care to shove such a thing into stable branches immediately before a release. As a stopgap, copy just the tzcode2016c logic that checks validity of timezone abbreviations. This prevents getting multiple "time zone abbreviation differs from POSIX standard" bleats with tzdata 2014b and later.
* Update time zone data files to tzdata release 2016c. | Tom Lane | 2016-03-25
| | | | | | | | | | | | | | | | DST law changes in Azerbaijan, Chile, Haiti, Palestine, and Russia (Altai, Astrakhan, Kirov, Sakhalin, Ulyanovsk regions). Historical corrections for Lithuania, Moldova, Russia (Kaliningrad, Samara, Volgograd). As of 2015b, the keepers of the IANA timezone database started to use numeric time zone abbreviations (e.g., "+04") instead of inventing abbreviations not found in the wild like "ASTT". This causes our rather old copy of zic to whine "warning: time zone abbreviation differs from POSIX standard" several times during "make install". This warning is harmless according to the IANA folk, and I don't see any problems with these abbreviations in some simple tests; but it seems like now would be a good time to update our copy of the tzcode stuff. I'll look into that soon.
* Remove dependency on psed for MSVC builds. | Andrew Dunstan | 2016-03-19
| | | | | | | | | | | | Modern Perl has removed psed from its core distribution, so it might not be readily available on some build platforms. We therefore replace its use with a Perl script generated by s2p, which is equivalent to the sed script. The latter is retained for non-MSVC builds to avoid creating a new hard dependency on Perl for non-Windows tarball builds. Backpatch to all live branches. Michael Paquier and me.
* Cope if platform declares mbstowcs_l(), but not locale_t, in <xlocale.h>. | Tom Lane | 2016-03-15
| | | | | | | | | | | | | | | | | | | | | | | Previously, we included <xlocale.h> only if necessary to get the definition of type locale_t. According to notes in PGAC_TYPE_LOCALE_T, this is important because on some versions of glibc that file supplies an incompatible declaration of locale_t. (This info may be obsolete, because on my RHEL6 box that seems to be the *only* definition of locale_t; but there may still be glibc's in the wild for which it's a live concern.) It turns out though that on FreeBSD and maybe other BSDen, you can get locale_t from stdlib.h or locale.h but mbstowcs_l() and friends only from <xlocale.h>. This was leaving us compiling calls to mbstowcs_l() and friends with no visible prototype, which causes a warning and could possibly cause actual trouble, since it's not declared to return int. Hence, adjust the configure checks so that we'll include <xlocale.h> either if it's necessary to get type locale_t or if it's necessary to get a declaration of mbstowcs_l(). Report and patch by Aleksander Alekseev, somewhat whacked around by me. Back-patch to all supported branches, since we have been using mbstowcs_l() since 9.1.
* Add missing NULL terminator to list_SECURITY_LABEL_preposition[]. | Tom Lane | 2016-03-14
| | | | | | | | | | | | On the machines I tried this on, pressing TAB after SECURITY LABEL led to being offered ON and FOR as intended, plus random other keywords (varying across machines). But if you were a bit more unlucky you'd get a crash, as reported by nummervet@mail.ru in bug #14019. Seems to have been an aboriginal error in the SECURITY LABEL patch, commit 4d355a8336e0f226. Hence, back-patch to all supported versions. There's no bug in HEAD, though, thanks to our recent tab-completion rewrite.
* Avoid crash on old Windows with AVX2-capable CPU for VS2013 builds | Magnus Hagander | 2016-03-10
| | | | | | | | | | | | | | | | The Visual Studio 2013 CRT generates invalid code when it makes a 64-bit build that is later used on a CPU that supports AVX2 instructions using a version of Windows before 7SP1/2008R2SP1. Detect this combination, and in those cases turn off the generation of FMA3, per recommendation from the Visual Studio team. The bug is actually in the CRT shipping with Visual Studio 2013, but Microsoft have stated they're only fixing it in newer major versions. The fix is therefore conditioned specifically on being built with this version of Visual Studio, and not previous or later versions. Author: Christian Ullrich
* Avoid unlikely data-loss scenarios due to rename() without fsync. | Andres Freund | 2016-03-09
| | | | | | | | | | | | | | | | | | | | | Renaming a file using rename(2) is not guaranteed to be durable in the face of crashes. Use the previously added durable_rename()/durable_link_or_rename() in various places where we previously just renamed files. Most of the changed call sites are arguably not critical, but it seems better to err on the side of too much durability. The most prominent known case where the previously missing fsyncs could cause data loss is crashes at the end of a checkpoint. After the actual checkpoint has been performed, old WAL files are recycled. When they're filled, their contents are fdatasynced, but we did not fsync the containing directory. An OS/hardware crash in an unfortunate moment could then end up leaving that file with its old name, but new content; WAL replay would thus not replay it. Reported-By: Tomas Vondra Author: Michael Paquier, Tomas Vondra, Andres Freund Discussion: 56583BDD.9060302@2ndquadrant.com Backpatch: All supported branches
* Introduce durable_rename() and durable_link_or_rename(). | Andres Freund | 2016-03-09
| | | | | | | | | | | | | | | | | | | | | | | | | Renaming a file using rename(2) is not guaranteed to be durable in the face of crashes, especially on filesystems like xfs and ext4 when mounted with data=writeback. To be certain that a rename() atomically replaces the previous file contents in the face of crashes and different filesystems, one has to fsync the old filename, rename the file, fsync the new filename, fsync the containing directory. This sequence is not generally adhered to currently, which exposes us to data loss risks. To avoid having to repeat this arduous sequence, introduce durable_rename(), which wraps all that. Also add durable_link_or_rename(). Several places use link() (with a fallback to rename()) to rename a file, trying to avoid replacing the target file out of paranoia. Some of those rename sequences need to be durable as well. There seems little reason to extend several copies of the same logic, so centralize the link() callers. This commit does not yet make use of the new functions; they're used in a followup commit. Author: Michael Paquier, Andres Freund Discussion: 56583BDD.9060302@2ndquadrant.com Backpatch: All supported branches
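
A simplified POSIX-level sketch of the fsync sequence this commit describes; this is not the actual durable_rename() implementation (function and buffer names are invented, and error handling is reduced to returning -1):

    #include <fcntl.h>
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* fsync the file or directory named by path; returns 0 on success, -1 on error */
    static int
    fsync_path(const char *path)
    {
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;
        if (fsync(fd) < 0)
        {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    /*
     * Crash-safe rename: flush the source file, rename it, then flush the
     * target and its containing directory so the new name itself survives
     * an OS or hardware crash.
     */
    static int
    my_durable_rename(const char *oldfile, const char *newfile)
    {
        char    dirbuf[1024];

        if (fsync_path(oldfile) < 0)
            return -1;
        if (rename(oldfile, newfile) < 0)
            return -1;
        if (fsync_path(newfile) < 0)
            return -1;

        /* dirname() may scribble on its argument, so pass it a copy */
        strncpy(dirbuf, newfile, sizeof(dirbuf) - 1);
        dirbuf[sizeof(dirbuf) - 1] = '\0';
        return fsync_path(dirname(dirbuf));
    }

The real function also has to cope with platforms (notably Windows) where opening and fsyncing a directory does not work this way.
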
* Fix incorrect handling of NULL index entries in indexed ROW() comparisons. | Tom Lane | 2016-03-09
| | | | | | | | | | | | | | | | | | | | | | An index search using a row comparison such as ROW(a, b) > ROW('x', 'y') would stop upon reaching a NULL entry in the "b" column, ignoring the fact that there might be non-NULL "b" values associated with later values of "a". This happens because _bt_mark_scankey_required() marks the subsidiary scankey for "b" as required, which is just wrong: it's for a column after the one with the first inequality key (namely "a"), and thus can't be considered a required match. This bit of brain fade dates back to the very beginnings of our support for indexed ROW() comparisons, in 2006. Kind of astonishing that no one came across it before Glen Takahashi, in bug #14010. Back-patch to all supported versions. Note: the given test case doesn't actually fail in unpatched 9.1, evidently because the fix for bug #6278 (i.e., stopping at nulls in either scan direction) is required to make it fail. I'm sure I could devise a case that fails in 9.1 as well, perhaps with something involving making a cursor back up; but it doesn't seem worth the trouble.
* plperl: Correctly handle empty arrays in plperl_ref_from_pg_array. | Andres Freund | 2016-03-08
| | | | | | | | | | | | plperl_ref_from_pg_array() didn't consider the case that postgres arrays can have 0 dimensions (when they're empty) and accessed the first dimension without a check. Fix that by special-casing the empty array case. Author: Alex Hunsaker Reported-By: Andres Freund / valgrind / buildfarm animal skink Discussion: 20160308063240.usnzg6bsbjrne667@alap3.anarazel.de Backpatch: 9.1-
* Fix backwards test for Windows service-ness in pg_ctl. | Tom Lane | 2016-03-07
| | | | | | | A thinko in a96761391 caused pg_ctl to get it exactly backwards when deciding whether to report problems to the Windows eventlog or to stderr. Per bug #14001 from Manuel Mathar, who also identified the fix. Like the previous patch, back-patch to all supported branches.
* Fix not-terribly-safe coding in NIImportOOAffixes() and NIImportAffixes(). | Tom Lane | 2016-03-06
| | | | | | | | | | | | | | | | | | There were two places in spell.c that supposed that they could search for a location in a string produced by lowerstr() and then transpose the offset into the original string. But this fails completely if lowerstr() transforms any characters into characters of different byte length, as can happen in Turkish UTF8 for instance. We'd added some comments about this coding in commit 51e78ab4ff328296, but failed to realize that it was not merely confusing but wrong. Coverity complained about this code years ago, but in such an opaque fashion that nobody understood what it was on about. I'm not entirely sure that this issue *is* what it's on about, actually, but perhaps this patch will shut it up -- and in any case the problem is clear. Back-patch to all supported branches.
* Fix compile breakage due to 0315dfa8f4afa8390383119330ca0bf241be4ad4. | Robert Haas | 2016-03-04
| | | | I wasn't careful enough when back-patching.
* Fix query-based tab completion for multibyte characters. | Robert Haas | 2016-03-04
| | | | | | | | | | The existing code confuses the byte length of the string (which is relevant when passing it to pg_strncasecmp) with the character length of the string (which is relevant when it is used with the SQL substring function). Separate those two concepts. Report and patch by Kyotaro Horiguchi, reviewed by Thomas Munro and reviewed and further revised by me.
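
A small, generic illustration of the byte-length vs. character-length distinction for UTF-8; this helper is hypothetical, not the psql code:

    #include <stdio.h>
    #include <string.h>

    /* number of characters in a UTF-8 string, as opposed to strlen()'s bytes */
    static int
    utf8_char_length(const char *s)
    {
        int n = 0;

        for (; *s; s++)
        {
            /* count only bytes that start a character, not 10xxxxxx continuations */
            if (((unsigned char) *s & 0xC0) != 0x80)
                n++;
        }
        return n;
    }

    int
    main(void)
    {
        const char *word = "caf\xC3\xA9";   /* "café" encoded in UTF-8 */

        /* 5 bytes (what a byte-wise prefix comparison needs) vs. 4 characters
         * (what SQL substring() counts) */
        printf("bytes: %zu, chars: %d\n", strlen(word), utf8_char_length(word));
        return 0;
    }
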
* Improve error message for rejecting RETURNING clauses with dropped columns. | Tom Lane | 2016-02-29
| | | | | | | | | | | | | | | This error message was written with only ON SELECT rules in mind, but since then we also made RETURNING-clause targetlists go through the same logic. This means that you got a rather off-topic error message if you tried to add a rule with RETURNING to a table having dropped columns. Ideally we'd just support that, but some preliminary investigation says that it might be a significant amount of work. Seeing that Nicklas Avén's complaint is the first one we've gotten about this in the ten years or so that the code's been like that, I'm unwilling to put much time into it. Instead, improve the error report by issuing a different message for RETURNING cases, and revise the associated comment based on this investigation. Discussion: 1456176604.17219.9.camel@jordogskog.no
* Fix typos | Alvaro Herrera | 2016-02-29
| | | | Author: Amit Langote
* Avoid multiple free_struct_lconv() calls on same data. | Tom Lane | 2016-02-28
| | | | | | | | | | | A failure partway through PGLC_localeconv() led to a situation where the next call would call free_struct_lconv() a second time, leading to free() on already-freed strings, typically leading to a core dump. Add a flag to remember whether we need to do that. Per report from Thom Brown. His example case only provokes the failure as far back as 9.4, but nonetheless this code is obviously broken, so back-patch to all supported branches.
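
The guard this commit describes, reduced to a generic C pattern with invented struct and function names:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    struct locale_copy
    {
        char   *decimal_point;
        char   *thousands_sep;
    };

    static struct locale_copy cached;
    static bool cached_valid = false;   /* remembers whether 'cached' owns memory */

    static void
    free_locale_copy(void)
    {
        if (!cached_valid)
            return;                     /* nothing allocated: avoids double free() */
        free(cached.decimal_point);
        free(cached.thousands_sep);
        memset(&cached, 0, sizeof(cached));
        cached_valid = false;
    }

    static int
    load_locale_copy(void)
    {
        free_locale_copy();             /* safe even if a previous attempt failed */
        cached.decimal_point = strdup(".");
        cached.thousands_sep = strdup(",");
        if (cached.decimal_point == NULL || cached.thousands_sep == NULL)
        {
            /* failure partway through: release what we got, leave the flag false */
            free(cached.decimal_point);
            free(cached.thousands_sep);
            memset(&cached, 0, sizeof(cached));
            return -1;
        }
        cached_valid = true;            /* only now is there something to free later */
        return 0;
    }
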
* Correct StartupSUBTRANS for page wraparound | Simon Riggs | 2016-02-19
| | | | | | | | | | StartupSUBTRANS() incorrectly handled cases near the max pageid in the subtrans data structure, which in some cases could lead to errors in startup for Hot Standby. This patch wraps the pageids correctly, avoiding any such errors. Identified by exhaustive crash testing by Jeff Janes. Jeff Janes
* Make plpython cope with funny characters in function names. | Tom Lane | 2016-02-16
| | | | | | | | | | | | | | A function name that's double-quoted in SQL can contain almost any characters, but we were using that name directly as part of the name generated for the Python-level function, and Python doesn't like anything that isn't pretty much a standard identifier. To fix, replace anything that isn't an ASCII letter or digit with an underscore in the generated name. This doesn't create any risk of duplicate Python function names because we were already appending the function OID to the generated name to ensure uniqueness. Per bug #13960 from Jim Nasby. Patch by Jim Nasby, modified a bit by me. Back-patch to all supported branches.
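
A sketch of the mangling rule described here; the helper below is hypothetical, not the actual PL/Python code:

    #include <ctype.h>
    #include <stdio.h>

    /*
     * Build a Python-safe name from an arbitrary (possibly quoted) SQL function
     * name: keep ASCII letters and digits, replace everything else with '_',
     * then append the function OID so distinct functions never collide.
     */
    static void
    make_python_name(char *dst, size_t dstlen, const char *sqlname, unsigned int oid)
    {
        size_t  i = 0;

        for (; *sqlname && i + 1 < dstlen; sqlname++)
        {
            unsigned char c = (unsigned char) *sqlname;

            dst[i++] = (c < 128 && isalnum(c)) ? (char) c : '_';
        }
        dst[i] = '\0';
        snprintf(dst + i, dstlen - i, "_%u", oid);
    }

    int
    main(void)
    {
        char    buf[64];

        make_python_name(buf, sizeof(buf), "weird \"name\"!", 41234);
        printf("%s\n", buf);    /* prints: weird__name___41234 */
        return 0;
    }
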
* Suppress compiler warnings about useless comparison of unsigned to zero. | Tom Lane | 2016-02-15
| | | | | | | | | | | | Reportedly, some compilers warn about tests like "c < 0" if c is unsigned, and hence complain about the character range checks I added in commit 3bb3f42f3749d40b8d4de65871e8d828b18d4a45. This is a bit of a pain since the regex library doesn't really want to assume that chr is unsigned. However, since any such reconfiguration would involve manual edits of regcustom.h anyway, we can put it on the shoulders of whoever wants to do that to adjust this new range-checking macro correctly. Per gripes from Coverity and Andres.
* Accept pg_ctl timeout from the PGCTLTIMEOUT environment variable. | Noah Misch | 2016-02-10
| | | | | | | | | | | Many automated test suites call pg_ctl. Buildfarm members axolotl, hornet, mandrill, shearwater, sungazer and tern have failed when server shutdown took longer than the pg_ctl default 60s timeout. This addition permits slow hosts to easily raise the timeout without us editing a --timeout argument into every test suite pg_ctl call. Back-patch to 9.1 (all supported versions) for the sake of automated testing. Reviewed by Tom Lane.
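
A sketch, with assumed names, of how an environment-variable default like this is commonly wired in (the real pg_ctl code may differ in details such as validation and precedence):

    #include <stdlib.h>

    #define DEFAULT_WAIT 60             /* pg_ctl's historical default, in seconds */

    static int
    get_wait_seconds(void)
    {
        const char *env = getenv("PGCTLTIMEOUT");

        if (env != NULL)
        {
            int seconds = atoi(env);

            if (seconds > 0)
                return seconds;         /* environment overrides the built-in default */
        }
        return DEFAULT_WAIT;            /* an explicit --timeout can still override both */
    }

A slow host can then run something like PGCTLTIMEOUT=180 make check without editing each test suite's pg_ctl invocations.
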
* Avoid use of sscanf() to parse ispell dictionary files. | Tom Lane | 2016-02-10
| | | | | | | | | | | | | | | | | | | | | | | | It turns out that on FreeBSD-derived platforms (including OS X), the *scanf() family of functions is pretty much brain-dead about multibyte characters. In particular it will apply isspace() to individual bytes of input even when those bytes are part of a multibyte character, thus allowing false recognition of a field-terminating space. We appear to have little alternative other than instituting a coding rule that *scanf() is not to be used if the input string might contain multibyte characters. (There was some discussion of relying on "%ls", but that probably just moves the portability problem somewhere else, and besides it doesn't fully prevent BSD *scanf() from using isspace().) This patch is a down payment on that: it gets rid of use of sscanf() to parse ispell dictionary files, which are certainly at great risk of having a problem. The code is cleaner this way anyway, though a bit longer. In passing, improve a few comments. Report and patch by Artur Zakirov, reviewed and somewhat tweaked by me. Back-patch to all supported branches.
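
To see the shape of such an sscanf() replacement, here is a generic field splitter that treats only ASCII space and tab as separators, so bytes inside a multibyte character can never be mistaken for whitespace (a sketch, not the spell.c code):

    #include <stdio.h>

    /*
     * Return the next whitespace-delimited field in *cursor, NUL-terminating it
     * and advancing *cursor past it; NULL when the line is exhausted.
     */
    static char *
    next_field(char **cursor)
    {
        char   *s = *cursor;
        char   *start;

        while (*s == ' ' || *s == '\t')     /* ASCII separators only */
            s++;
        if (*s == '\0' || *s == '\n')
            return NULL;
        start = s;
        while (*s && *s != ' ' && *s != '\t' && *s != '\n')
            s++;
        if (*s)
            *s++ = '\0';
        *cursor = s;
        return start;
    }

    int
    main(void)
    {
        char    line[] = "SFX\tA\tY\t1";    /* affix-file-like input */
        char   *p = line;
        char   *field;

        while ((field = next_field(&p)) != NULL)
            printf("[%s]\n", field);
        return 0;
    }
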
* Stamp 9.1.20. (tag: REL9_1_20) | Tom Lane | 2016-02-08
|
* Translation updates | Peter Eisentraut | 2016-02-08
| | | | | Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: bbedbfae7586389e1f43b8116d76af3ac528c211
* Fix some regex issues with out-of-range characters and large char ranges. | Tom Lane | 2016-02-08
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a bad choice because it is outside the range of type "celt" (int32). Characters approaching that limit could lead to infinite loops in logic such as "for (c = a; c <= b; c++)" where c is of type celt but the range bounds are chr. Such loops will work safely only if CHR_MAX+1 is representable in celt, since c must advance to beyond b before the loop will exit. Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe. It's highly unlikely that Unicode will ever assign codes that high, and none of our other backend encodings need characters beyond that either. In addition to modifying the macro, we have to explicitly enforce character range restrictions on the values of \u, \U, and \x escape sequences, else the limit is trivially bypassed. Also, the code for expanding case-independent character ranges in bracket expressions had a potential integer overflow in its calculation of the number of characters it could generate, which could lead to allocating too small a character vector and then overwriting memory. An attacker with the ability to supply arbitrary regex patterns could easily cause transient DOS via server crashes, and the possibility for privilege escalation has not been ruled out. Quite aside from the integer-overflow problem, the range expansion code was unnecessarily inefficient in that it always produced a result consisting of individual characters, abandoning the knowledge that we had a range to start with. If the input range is large, this requires excessive memory. Change it so that the original range is reported as-is, and then we add on any case-equivalent characters that are outside that range. With this approach, we can bound the number of individual characters allowed without sacrificing much. This patch allows at most 100000 individual characters, which I believe to be more than the number of case pairs existing in Unicode, so that the restriction will never be hit in practice. It's still possible for range() to take awhile given a large character code range, so also add statement-cancel detection to its loop. The downstream function dovec() also lacked cancel detection, and could take a long time given a large output from range(). Per fuzz testing by Greg Stark. Back-patch to all supported branches. Security: CVE-2016-0773
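
A compact illustration of the loop-termination hazard described above, with types mirroring celt and chr (a standalone demo, not the regex code):

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t  celt;      /* signed loop counter, as in the regex code */
    typedef uint32_t chr;       /* character code */

    int
    main(void)
    {
        chr     a = 0x7ffffffd;
        chr     b = 0x7ffffffe; /* the new CHR_MAX */
        celt    c;
        int     iterations = 0;

        /*
         * The loop exits only when c advances past b.  With CHR_MAX at
         * 0x7ffffffe, c can still reach 0x7fffffff inside int32 range, so the
         * test eventually fails.  With the old CHR_MAX of 0xfffffffe the upper
         * bound would not even fit in celt, and c could never legitimately
         * advance beyond it.
         */
        for (c = (celt) a; c <= (celt) b; c++)
            iterations++;

        printf("%d iterations\n", iterations);      /* prints: 2 iterations */
        return 0;
    }
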
* Force certain "pljava" custom GUCs to be PGC_SUSET.Noah Misch2016-02-05
| | | | | | | Future PL/Java versions will close CVE-2016-0766 by making these GUCs PGC_SUSET. This PostgreSQL change independently mitigates that PL/Java vulnerability, helping sites that update PostgreSQL more frequently than PL/Java. Back-patch to 9.1 (all supported versions).
* Update time zone data files to tzdata release 2016a. | Tom Lane | 2016-02-05
| | | | | DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory (Zabaykalsky Krai). Historical corrections for Pakistan.
* In pg_dump, ensure that view triggers are processed after view rules. | Tom Lane | 2016-02-04
| | | | | | | | | | | | | | | | | | | If a view is split into CREATE TABLE + CREATE RULE to break a circular dependency, then any triggers on the view must be dumped/reloaded after the CREATE RULE; else the backend may reject the CREATE TRIGGER because it's the wrong type of trigger for a plain table. This works all right in plain dump/restore because of pg_dump's sorting heuristic that places triggers after rules. However, when using parallel restore, the ordering must be enforced by a dependency --- and we didn't have one. Fixing this is a mere matter of adding an addObjectDependency() call, except that we need to be able to find all the triggers belonging to the view relation, and there was no easy way to do that. Add fields to pg_dump's TableInfo struct to remember where the associated TriggerInfo struct(s) are. Per bug report from Dennis Kögel. The failure can be exhibited at least as far back as 9.1, so back-patch to all supported branches.
* Make sure ecpg header files do not have a comment lasting several lines, one of which is a preprocessor directive. | Michael Meskes | 2016-02-01
| | | | This leads ecpg to incorrectly parse the comment as nested.
* Fix incorrect pattern-match processing in psql's \det command. | Tom Lane | 2016-01-29
| | | | | | | | | | | | | | listForeignTables' invocation of processSQLNamePattern did not match up with the other ones that handle potentially-schema-qualified names; it failed to make use of pg_table_is_visible() and also passed the name arguments in the wrong order. Bug seems to have been aboriginal in commit 0d692a0dc9f0e532. It accidentally sort of worked as long as you didn't inquire too closely into the behavior, although the silliness was later exposed by inconsistencies in the test queries added by 59efda3e50ca4de6 (which I probably should have questioned at the time, but didn't). Per bug #13899 from Reece Hart. Patch by Reece Hart and Tom Lane. Back-patch to all affected branches.
* Fix startup so that log prefix %h works for the log_connections message. | Tom Lane | 2016-01-26
| | | | | | | We entirely randomly chose to initialize port->remote_host just after printing the log_connections message, when we could perfectly well do it just before, allowing %h and %r to work for that message. Per gripe from Artem Tomyuk.
* Properly install dynloader.h on MSVC builds | Bruce Momjian | 2016-01-19
| | | | | | | | | | | This will enable PL/Java to be cleanly compiled, as dynloader.h is a requirement. Report by Chapman Flack Patch by Michael Paquier Backpatch through 9.1
* Properly close token in sspi authentication | Magnus Hagander | 2016-01-14
| | | | | | | | We can never leak more than one token, but we shouldn't do that. We don't bother closing it in the error paths since the process will exit shortly anyway. Christian Ullrich
* Handle extension members when first setting object dump flags in pg_dump. | Tom Lane | 2016-01-13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | pg_dump's original approach to handling extension member objects was to run around and clear (or set) their dump flags rather late in its data collection process. Unfortunately, quite a lot of code expects those flags to be valid before that; which was an entirely reasonable expectation before we added extensions. In particular, this explains Karsten Hilbert's recent report of pg_upgrade failing on a database in which an extension has been installed into the pg_catalog schema. Its objects are initially marked as not-to-be-dumped on the strength of their schema, and later we change them to must-dump because we're doing a binary upgrade of their extension; but we've already skipped essential tasks like making associated DO_SHELL_TYPE objects. To fix, collect extension membership data first, and incorporate it in the initial setting of the dump flags, so that those are once again correct from the get-go. This has the undesirable side effect of slightly lengthening the time taken before pg_dump acquires table locks, but testing suggests that the increase in that window is not very much. Along the way, get rid of ugly special-case logic for deciding whether to dump procedural languages, FDWs, and foreign servers; dump decisions for those are now correct up-front, too. In 9.3 and up, this also fixes erroneous logic about when to dump event triggers (basically, they were *always* dumped before). In 9.5 and up, transform objects had that problem too. Since this problem came in with extensions, back-patch to all supported versions.
* Clean up some lack-of-STRICT issues in the core code, too. | Tom Lane | 2016-01-09
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A scan for missed proisstrict markings in the core code turned up these functions: brin_summarize_new_values pg_stat_reset_single_table_counters pg_stat_reset_single_function_counters pg_create_logical_replication_slot pg_create_physical_replication_slot pg_drop_replication_slot The first three of these take OID, so a null argument will normally look like a zero to them, resulting in "ERROR: could not open relation with OID 0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX functions. The other three will dump core on a null argument, though this is mitigated by the fact that they won't do so until after checking that the caller is superuser or has rolreplication privilege. In addition, the pg_logical_slot_get/peek[_binary]_changes family was intentionally marked nonstrict, but failed to make nullness checks on all the arguments; so again a null-pointer-dereference crash is possible but only for superusers and rolreplication users. Add the missing ARGISNULL checks to the latter functions, and mark the former functions as strict in pg_proc. Make that change in the back branches too, even though we can't force initdb there, just so that installations initdb'd in future won't have the issue. Since none of these bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX functions hardly misbehave at all), it seems sufficient to do this. In addition, fix some order-of-operations oddities in the slot_get_changes family, mostly cosmetic, but not the part that moves the function's last few operations into the PG_TRY block. As it stood, there was significant risk for an error to exit without clearing historical information from the system caches. The slot_get_changes bugs go back to 9.4 where that code was introduced. Back-patch appropriate subsets of the pg_proc changes into all active branches, as well.
* Clean up code for widget_in() and widget_out(). | Tom Lane | 2016-01-09
| | | | | | | | | | | | | | | | | | | | | | | | | Given syntactically wrong input, widget_in() could call atof() with an indeterminate pointer argument, typically leading to a crash; or if it didn't do that, it might return a NULL pointer, which again would lead to a crash since old-style C functions aren't supposed to do things that way. Fix that by correcting the off-by-one syntax test and throwing a proper error rather than just returning NULL. Also, since widget_in and widget_out have been marked STRICT for a long time, their tests for null inputs are just dead code; remove 'em. In the oldest branches, also improve widget_out to use snprintf not sprintf, just to be sure. In passing, get rid of a long-since-useless sprintf into a local buffer that nothing further is done with, and make some other minor coding style cleanups. In the intended regression-testing usage of these functions, none of this is very significant; but if the regression test database were left around in a production installation, these bugs could amount to a minor security hazard. Piotr Stefaniak, Michael Paquier, and Tom Lane
* Add STRICT to some C functions created by the regression tests. | Tom Lane | 2016-01-09
| | | | | | | | | | These functions readily crash when passed a NULL input value. The tests themselves do not pass NULL values to them; but when the regression database is used as a basis for fuzz testing, they cause a lot of noise. Also, if someone were to leave a regression database lying about in a production installation, these would create a minor security hazard. Andreas Seltenreich
* Fix unobvious interaction between -X switch and subdirectory creation. | Tom Lane | 2016-01-07
| | | | | | | | | Turns out the only reason initdb -X worked is that pg_mkdir_p won't whine if you point it at something that's a symlink to a directory. Otherwise, the attempt to create pg_xlog/ just like all the other subdirectories would have failed. Let's be a little more explicit about what's happening. Oversight in my patch for bug #13853 (mea culpa for not testing -X ...)
* Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA. | Tom Lane | 2016-01-07
| | | | | | | | | | | | | | When we're creating subdirectories of PGDATA during initdb, we know darn well that the parent directory exists (or should exist) and that the new subdirectory doesn't (or shouldn't). There is therefore no need to use anything more complicated than mkdir(). Using pg_mkdir_p() just opens us up to unexpected failure modes, such as the one exhibited in bug #13853 from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there, but it is clear that we didn't need to be trying to create parent directories in the first place. We're not even saving any code, as proven by the fact that this patch nets out at minus five lines. Since this is a response to a field bug report, back-patch to all branches.
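
A sketch of the simpler approach this commit argues for, using plain POSIX mkdir() (hypothetical helper, not the initdb code):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /*
     * Create a known-fresh subdirectory of an existing data directory.  Plain
     * mkdir() suffices here, and any failure is a real error to report, not
     * something a recursive mkdir -p style helper should quietly absorb.
     */
    static int
    make_subdirectory(const char *pgdata, const char *subdir)
    {
        char    path[1024];

        snprintf(path, sizeof(path), "%s/%s", pgdata, subdir);
        if (mkdir(path, 0700) < 0)
        {
            fprintf(stderr, "could not create directory \"%s\": %s\n",
                    path, strerror(errno));
            return -1;
        }
        return 0;
    }
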
* Windows: Make pg_ctl reliably detect service status | Alvaro Herrera | 2016-01-07
| | | | | | | | | | | | | | | | | | pg_ctl is using isatty() to verify whether the process is running in a terminal, and if not it sends its output to Windows' Event Log ... which does the wrong thing when the output has been redirected to a pipe, as reported in bug #13592. To fix, make pg_ctl use the code we already have to detect service-ness: in the master branch, move src/backend/port/win32/security.c to src/port (with suitable tweaks so that it runs properly in backend and frontend environments); pg_ctl already has access to pgport so it Just Works. In older branches, that's likely to cause trouble, so instead duplicate the required code in pg_ctl.c. Author: Michael Paquier Bug report and diagnosis: Egon Kocjan Backpatch: all supported branches
* Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition. | Tom Lane | 2016-01-04
| | | | | | | | | | | | | | | | | | | | | | | | | | pgwin32_recv() has treated a non-error return of zero bytes from WSARecv() as being a reason to block ever since the current implementation was introduced in commit a4c40f140d23cefb. However, so far as one can tell from Microsoft's documentation, that is just wrong: what it means is graceful connection closure (in stream protocols) or receipt of a zero-length message (in message protocols), and neither case should result in blocking here. The only reason the code worked at all was that control then fell into the retry loop, which did *not* treat zero bytes specially, so we'd get out after only wasting some cycles. But as of 9.5 we do not normally reach the retry loop and so the bug is exposed, as reported by Shay Rojansky and diagnosed by Andres Freund. Remove the unnecessary test on the byte count, and rearrange the code in the retry loop so that it looks identical to the initial sequence. Back-patch of commit 90e61df8130dc7051a108ada1219fb0680cb3eb6. The original plan was to apply this only to 9.5 and up, but after discussion and buildfarm testing, it seems better to back-patch. The noblock code path has been at risk of this problem since it was introduced (in 9.0); if it did happen in pre-9.5 branches, the symptom would be that a walsender would wait indefinitely rather than noticing a loss of connection. While we lack proof that the case has been seen in the field, it seems possible that it's happened without being reported.
* Teach pg_dump to quote reloption values safely. | Tom Lane | 2016-01-02
| | | | | | | Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected the fact that several code paths in pg_dump were printing reloptions values that had not gotten massaged by ruleutils. Apply essentially the same quoting logic in those places, too.
* Teach flatten_reloptions() to quote option values safely. | Tom Lane | 2016-01-01
| | | | | | | | | | | | | | | | | | | | | | | | | | flatten_reloptions() supposed that it didn't really need to do anything beyond inserting commas between reloption array elements. However, in principle the value of a reloption could be nearly anything, since the grammar allows a quoted string there. Any restrictions on it would come from validity checking appropriate to the particular option, if any. A reloption value that isn't a simple identifier or number could thus lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. We've gotten away with not worrying about this so far with the core-supported reloptions, but extensions might allow reloption values that cause trouble, as in bug #13840 from Kouhei Sutou. To fix, split the reloption array elements explicitly, and then convert any value that doesn't look like a safe identifier to a string literal. (The details of the quoting rule could be debated, but this way is safe and requires little code.) While we're at it, also quote reloption names if they're not safe identifiers; that may not be a likely problem in the field, but we might as well try to be bulletproof here. It's been like this for a long time, so back-patch to all supported branches. Kouhei Sutou, adjusted some by me
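
The quoting rule described here, reduced to a standalone sketch; the notion of a "safe" value below is deliberately rough, and the real code's identifier test is stricter:

    #include <ctype.h>
    #include <stdio.h>

    /* very rough notion of "safe to print unquoted": ASCII letters, digits, _ and . */
    static int
    value_is_safe(const char *s)
    {
        if (*s == '\0')
            return 0;
        for (; *s; s++)
        {
            unsigned char c = (unsigned char) *s;

            if (!(c < 128 && (isalnum(c) || c == '_' || c == '.')))
                return 0;
        }
        return 1;
    }

    /* print name=value, single-quoting the value (doubling embedded quotes) if needed */
    static void
    print_reloption(const char *name, const char *value)
    {
        printf("%s=", name);
        if (value_is_safe(value))
            printf("%s", value);
        else
        {
            putchar('\'');
            for (; *value; value++)
            {
                if (*value == '\'')
                    putchar('\'');      /* '' is the SQL escape for a quote */
                putchar(*value);
            }
            putchar('\'');
        }
        putchar('\n');
    }

    int
    main(void)
    {
        print_reloption("fillfactor", "70");             /* fillfactor=70 */
        print_reloption("my_ext.label", "hello world");  /* my_ext.label='hello world' */
        return 0;
    }
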
* Add some more defenses against silly estimates to gincostestimate(). | Tom Lane | 2016-01-01
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A report from Andy Colson showed that gincostestimate() was not being nearly paranoid enough about whether to believe the statistics it finds in the index metapage. The problem is that the metapage stats (other than the pending-pages count) are only updated by VACUUM, and in the worst case could still reflect the index's original empty state even when it has grown to many entries. We attempted to deal with that by scaling up the stats to match the current index size, but if nEntries is zero then scaling it up still gives zero. Moreover, the proportion of pages that are entry pages vs. data pages vs. pending pages is unlikely to be estimated very well by scaling if the index is now orders of magnitude larger than before. We can improve matters by expanding the use of the rule-of-thumb estimates I introduced in commit 7fb008c5ee59b040: if the index has grown by more than a cutoff amount (here set at 4X growth) since VACUUM, then use the rule-of-thumb numbers instead of scaling. This might not be exactly right but it seems much less likely to produce insane estimates. I also improved both the scaling estimate and the rule-of-thumb estimate to account for numPendingPages, since it's reasonable to expect that that is accurate in any case, and certainly pages that are in the pending list are not either entry or data pages. As a somewhat separate issue, adjust the estimation equations that are concerned with extra fetches for partial-match searches. These equations suppose that a fraction partialEntries / numEntries of the entry and data pages will be visited as a consequence of a partial-match search. Now, it's physically impossible for that fraction to exceed one, but our estimate of partialEntries is mostly bunk, and our estimate of numEntries isn't exactly gospel either, so we could arrive at a silly value. In the example presented by Andy we were coming out with a value of 100, leading to insane cost estimates. Clamp the fraction to one to avoid that. Like the previous patch, back-patch to all supported branches; this problem can be demonstrated in one form or another in all of them.
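
The "scale or fall back" decision described above, as a hedged sketch with invented field names; the real gincostestimate() works from the GIN metapage and the rule-of-thumb constants mentioned in the message:

    #include <stdbool.h>

    /* statistics as last recorded by VACUUM (hypothetical struct) */
    typedef struct GinStatsSnapshot
    {
        double  pages_at_vacuum;    /* total index pages when stats were taken */
        double  entries_at_vacuum;  /* nEntries recorded then */
    } GinStatsSnapshot;

    /*
     * Decide whether scaling the stored statistics up to the current index size
     * is still sensible, or whether the index has grown so much (more than 4X
     * here) that rule-of-thumb estimates are safer.
     */
    static bool
    stats_still_usable(const GinStatsSnapshot *stats, double pages_now)
    {
        if (stats->pages_at_vacuum <= 0 || stats->entries_at_vacuum <= 0)
            return false;           /* never vacuumed, or empty at VACUUM time */
        return pages_now <= 4.0 * stats->pages_at_vacuum;
    }
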
* Rework internals of changing a type's ownership | Alvaro Herrera | 2015-12-21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This is necessary so that REASSIGN OWNED does the right thing with composite types, to wit, that it also alters ownership of the type's pg_class entry -- previously, the pg_class entry remained owned by the original user, which later caused other failures such as the new owner's inability to use ALTER TYPE to rename an attribute of the affected composite. Also, if the original owner is later dropped, the pg_class entry becomes owned by a non-existent user, which is bogus. To fix, create a new routine AlterTypeOwner_oid which knows whether to pass the request to ATExecChangeOwner or deal with it directly, and use that in shdepReassignOwner rather than calling AlterTypeOwnerInternal directly. AlterTypeOwnerInternal is now simpler in that it only modifies the pg_type entry and recurses to handle a possible array type; higher-level tasks are handled by either AlterTypeOwner directly or AlterTypeOwner_oid. I took the opportunity to add a few more objects to the test rig for REASSIGN OWNED, so that more cases are exercised. Additional ones could be added for superuser-only-ownable objects (such as FDWs and event triggers) but I didn't want to push my luck by adding a new superuser to the tests on a backpatchable bug fix. Per bug #13666 reported by Chris Pacejo. This is a backpatch of commit 756e7b4c9db1 to branches 9.1 -- 9.4.
* adjust ACL owners for REASSIGN and ALTER OWNER TO | Alvaro Herrera | 2015-12-21
| | | | | | | | | | | | | | | | | | | | | | When REASSIGN and ALTER OWNER TO are used, both the object owner and ACL list should be changed from the old owner to the new owner. This patch fixes types, foreign data wrappers, and foreign servers to change their ACL list properly; they already changed owners properly. Report by Alexey Bashtanov This is a backpatch of commit 59367fdf97c (for bug #9923) by Bruce Momjian to branches 9.1 - 9.4; it wasn't backpatched originally out of concerns that it would create a backwards compatibility problem, but per discussion related to bug #13666 that turns out to have been misguided. (Therefore, the entry in the 9.5 release notes should be removed.) Note that 9.1 didn't have privileges on types (which were introduced by commit 729205571e81), so this commit only changes foreign-data related objects in that branch. Discussion: http://www.postgresql.org/message-id/20151216224004.GL2618@alvherre.pgsql http://www.postgresql.org/message-id/10227.1450373793@sss.pgh.pa.us
* Remove silly completion for "DELETE FROM tabname ...". | Tom Lane | 2015-12-20
| | | | | | psql offered USING, WHERE, and SET in this context, but SET is not a valid possibility here. Seems to have been a thinko in commit f5ab0a14ea83eb6c which added DELETE's USING option.