| Commit message | Author | Age |
| |
|
|
|
|
|
|
|
|
|
| |
The fsync code from the backend essentially assumes that somebody's already
validated PGDATA, at least to the extent of it being a readable directory.
That's safe enough for initdb's normal code path too, but "initdb -S"
doesn't have any other processing at all that touches the target directory.
To have reasonable error-case behavior, add a pg_check_dir call.
Per gripe from Peter E.
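As a hedged sketch of the kind of guard this adds (the return-code handling and
message wording here are assumptions, not necessarily the committed initdb code):

    /* Before walking PGDATA for fsync, make sure it is at least a readable directory. */
    switch (pg_check_dir(pg_data))
    {
        case 0:
            fprintf(stderr, _("%s: directory \"%s\" does not exist\n"),
                    progname, pg_data);
            exit_nicely();
        case -1:
            fprintf(stderr, _("%s: could not access directory \"%s\": %s\n"),
                    progname, pg_data, strerror(errno));
            exit_nicely();
        default:
            break;          /* it exists and is readable; proceed with the fsync pass */
    }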
|
|
|
|
|
|
|
|
|
|
| |
The argument that this is a sufficiently-expected case to be silently
ignored seems pretty thin. Andres had brought it up back when we were
still considering that most fsync failures should be hard errors, and it
probably would be legit not to fail hard for ETXTBSY --- but the same is
true for EROFS and other cases, which is why we gave up on hard failures.
ETXTBSY is surely not a normal case, so logging the failure seems fine
from here.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Make initdb's version of this logic look as much like the backend's
as possible. This is much less critical than in the backend since not
so many people use "initdb -S", but we want the same corner-case error
handling in both cases.
Back-patch to 9.3, where the initdb -S option was introduced. Before that,
initdb only had to deal with freshly-created data directories, wherein
no failures should be expected.
Abhijit Menon-Sen
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 2ce439f3379aed857517c8ce207485655000fc8e introduced a rather serious
regression, namely that if its scan of the data directory came across any
un-fsync-able files, it would fail and thereby prevent database startup.
Worse yet, symlinks to such files also caused the problem, which meant that
crash restart was guaranteed to fail on certain common installations such
as older Debian.
After discussion, we agreed that (1) failure to start is worse than any
consequence of not fsync'ing is likely to be, therefore treat all errors
in this code as nonfatal; (2) we should not chase symlinks other than
those that are expected to exist, namely pg_xlog/ and tablespace links
under pg_tblspc/. The latter restriction avoids possibly fsync'ing a
much larger part of the filesystem than intended, if the user has left
random symlinks hanging about in the data directory.
This commit takes care of that and also does some code beautification,
mainly moving the relevant code into fd.c, which seems a much better place
for it than xlog.c, and making sure that the conditional compilation for
the pre_sync_fname pass has something to do with whether pg_flush_data
works.
I also relocated the call site in xlog.c down a few lines; it seems a
bit silly to be doing this before ValidateXLOGDirectoryStructure().
The similar logic in initdb.c ought to be made to match this, but that
change is noncritical and will be dealt with separately.
Back-patch to all active branches, like the prior commit.
Abhijit Menon-Sen and Tom Lane
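The "nonfatal" treatment amounts to logging problems and pressing on rather than
refusing to start; a minimal sketch of that idea (assumed helper shape and naming,
not the actual fd.c code):

    /* Illustrative only: fsync one path, logging rather than erroring on failure. */
    static void
    fsync_fname_nonfatal(const char *fname)
    {
        int         fd = open(fname, O_RDONLY | PG_BINARY, 0);

        if (fd < 0)
        {
            ereport(LOG,
                    (errcode_for_file_access(),
                     errmsg("could not open file \"%s\": %m", fname)));
            return;
        }
        if (fsync(fd) != 0)
            ereport(LOG,
                    (errcode_for_file_access(),
                     errmsg("could not fsync file \"%s\": %m", fname)));
        close(fd);
    }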
|
|
|
|
|
|
|
| |
Seems to have been an oversight in the original leakproofness patch.
Per report and patch from Jeevan Chalke.
In passing, prettify some awkward leakproof-related code in AlterFunction.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
specparse.y and specscanner.l used "string" as a token name. Now, bison
likes to define each token name as a macro for the token code it assigns,
which means those names are basically off-limits for any other use within
the grammar file or included headers. So names as generic as "string" are
dangerous. This is what was causing the recent failures on protosciurus:
some versions of Solaris' sys/kstat.h use "string" as a field name.
With late-model bison we don't see this problem because the token macros
aren't defined till later (that is why castoroides didn't show the problem
even though it's on the same machine). But protosciurus uses bison 1.875
which defines the token macros up front.
This land mine has been there from day one; we'd have found it sooner
except that protosciurus wasn't trying to run the isolation tests till
recently.
To fix, rename the token to "string_literal" which is hopefully less
likely to collide with names used by system headers. Back-patch to
all branches containing the isolation tests.
|
|
|
|
|
|
|
|
|
|
|
| |
Multixact truncation is now handled differently, and this file hadn't
gotten the memo.
Per note from Amit Langote. I didn't use his patch, though.
Also update the description of infomask bits, which weren't completely up
to date either. This commit also propagates b01a4f6838 back to 9.3 and
9.4, which apparently I failed to do back then.
|
|
|
|
|
|
|
|
|
|
|
|
| |
The name objectType is widely used as a field name, and it's pure luck that
this conflict has not caused pgindent to go crazy before. It messed up
pg_audit.c pretty good though. Since pg_shdepend.c doesn't export this
typedef and only uses it in three places, changing that seems saner than
changing the field usages.
Back-patch because we're contemplating using the union of all branch
typedefs for future pgindent runs, so this won't fix anything if it
stays the same in back branches.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since 7.3.2, libpq has been coded in such a way that the only SSL protocol
it would allow was TLS v1. That approach is looking increasingly obsolete.
In commit 820f08cabdcbb899 we fixed it to allow TLS >= v1, but did not
back-patch the change at the time, partly out of caution and partly because
the question was confused by a contemporary server-side change to reject
the now-obsolete SSL protocol v3. 9.4 has now been out long enough that
it seems safe to assume the change is OK; hence, back-patch into 9.0-9.3.
(I also chose to back-patch some relevant comments added by commit
326e1d73c476a0b5, but did *not* change the server behavior; hence, pre-9.4
servers will continue to allow SSL v3, even though no remotely modern
client will request it.)
Per gripe from Jan Bilek.
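In OpenSSL terms, the "TLS >= v1" behavior boils down to the following; this is a
sketch of the approach rather than the exact libpq code:

    #include <openssl/ssl.h>

    static SSL_CTX *
    make_tls_context(void)
    {
        /* The old coding used SSL_CTX_new(TLSv1_method()), pinning the protocol to TLS v1. */
        SSL_CTX    *ctx = SSL_CTX_new(SSLv23_method());     /* negotiate the best shared protocol */

        if (ctx != NULL)
            SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);   /* but never below TLS v1 */
        return ctx;
    }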
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This reverts commit 16304a013432931e61e623c8d85e9fe24709d9ba, except
for its changes in src/port/snprintf.c; as well as commit
cac18a76bb6b08f1ecc2a85e46c9d2ab82dd9d23 which is no longer needed.
Fujii Masao reported that the previous commit caused failures in psql on
OS X, since if one exits the pager program early while viewing a query
result, psql sees an EPIPE error from fprintf --- and the wrapper function
thought that was reason to panic. (It's a bit surprising that the same
does not happen on Linux.) Further discussion among the security list
concluded that the risk of other such failures was far too great, and
that the one-size-fits-all approach to error handling embodied in the
previous patch is unlikely to be workable.
This leaves us again exposed to the possibility of the type of failure
envisioned in CVE-2015-3166. However, that failure mode is strictly
hypothetical at this point: there is no concrete reason to believe that
an attacker could trigger information disclosure through the supposed
mechanism. In the first place, the attack surface is fairly limited,
since so much of what the backend does with format strings goes through
stringinfo.c or psprintf(), and those already had adequate defenses.
In the second place, even granting that an unprivileged attacker could
control the occurrence of ENOMEM with some precision, it's a stretch to
believe that he could induce it just where the target buffer contains some
valuable information. So we concluded that the risk of non-hypothetical
problems induced by the patch greatly outweighs the security risks.
We will therefore revert, and instead undertake closer analysis to
identify specific calls that may need hardening, rather than attempt a
universal solution.
We have kept the portion of the previous patch that improved snprintf.c's
handling of errors when it calls the platform's sprintf(). That seems to
be an unalloyed improvement.
Security: CVE-2015-3166
|
|
|
|
|
|
|
|
|
|
|
|
| |
The point of the assertion is to ensure that the arrays allocated in stack
are large enough, but the check was one item short.
This won't matter in practice because MaxIndexTuplesPerPage is an
overestimate, so you can't have that many items on a page in reality.
But let's be tidy.
Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like
the patch that added the assertion.
|
| |
|
|
|
|
|
|
|
| |
The old one didn't include %m anywhere, and required extra
translation.
Report by Peter Eisentraut. Fix by me. Review by Tom Lane.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
PostgreSQL already checked the vast majority of these, missing this
handful that nearly cannot fail. If putenv() failed with ENOMEM in
pg_GSS_recvauth(), authentication would proceed with the wrong keytab
file. If strftime() returned zero in cache_locale_time(), using the
unspecified buffer contents could lead to information exposure or a
crash. Back-patch to 9.0 (all supported versions).
Other unchecked calls to these functions, especially those in frontend
code, pose negligible security concern. This patch does not address
them. Nonetheless, it is always better to check return values whose
specification provides for indicating an error.
In passing, fix an off-by-one error in strftime_win32()'s invocation of
WideCharToMultiByte(). Upon retrieving a value of exactly MAX_L10N_DATA
bytes, strftime_win32() would overrun the caller's buffer by one byte.
MAX_L10N_DATA is chosen to exceed the length of every possible value, so
the vulnerable scenario probably does not arise.
Security: CVE-2015-3166
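The checks being added are just the documented failure indications for these
calls; a self-contained sketch (illustrative values, not the patched code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int
    main(void)
    {
        char        buf[128];
        time_t      now = time(NULL);
        struct tm  *tm = localtime(&now);

        /* strftime() reports failure by returning zero, leaving buf unspecified */
        if (strftime(buf, sizeof(buf), "%x %X", tm) == 0)
            fprintf(stderr, "strftime failed\n");
        else
            printf("%s\n", buf);

        /* putenv() can fail with ENOMEM; ignoring that can leave the wrong value in place */
        if (putenv("KRB5_KTNAME=/path/to/keytab") != 0)
            fprintf(stderr, "putenv failed\n");

        return 0;
    }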
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
All known standard library implementations of these functions can fail
with ENOMEM. A caller neglecting to check for failure would experience
missing output, information exposure, or a crash. Check return values
within wrappers and code, currently just snprintf.c, that bypasses the
wrappers. The wrappers do not return after an error, so their callers
need not check. Back-patch to 9.0 (all supported versions).
Popular free software standard library implementations do take pains to
bypass malloc() in simple cases, but they risk ENOMEM for floating point
numbers, positional arguments, large field widths, and large precisions.
No specification demands such caution, so this commit regards every call
to a printf family function as a potential threat.
Injecting the wrappers implicitly is a compromise between patch scope
and design goals. I would prefer to edit each call site to name a
wrapper explicitly. libpq and the ECPG libraries would, ideally, convey
errors to the caller rather than abort(). All that would be painfully
invasive for a back-patched security fix, hence this compromise.
Security: CVE-2015-3166
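The wrapper idea, sketched (names and the abort-on-failure policy are illustrative
of the description above, not the actual patch):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative wrapper: callers need not check, because it never returns on failure. */
    static int
    checked_vfprintf(FILE *stream, const char *fmt, va_list args)
    {
        int         ret = vfprintf(stream, fmt, args);

        if (ret < 0)                /* e.g. ENOMEM inside the C library */
        {
            perror("vfprintf failed");
            abort();
        }
        return ret;
    }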
|
|
|
|
| |
The next commit needs it. Back-patch to 9.0 (all supported versions).
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reentering this function with the right timing caused a double free,
typically crashing the backend. By synchronizing a disconnection with
the authentication timeout, an unauthenticated attacker could achieve
this somewhat consistently. Call be_tls_close() solely from within
proc_exit_prepare(). Back-patch to 9.0 (all supported versions).
Benkocs Norbert Attila
Security: CVE-2015-3165
|
|
|
|
|
| |
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3ce9e5ca72c3948b4c592e82a5ddb9b69b97d14b
|
| |
|
|
|
|
|
|
| |
DST law changes in Egypt, Mongolia, Palestine.
Historical corrections for Canada and Chile.
Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer, which
takes a lock on the buffer before zeroing it. However, you cannot take a
lock on a local buffer, so you would get a segfault instead. The version of that
patch committed to master included a check for !isLocalBuf, and therefore
didn't crash, but oddly I missed that in the back-patched versions. This
patch adds that check to the back-branches too.
RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash indexes.
WAL replay only deals with shared buffers, so the only way to trigger the
bug is with a temporary hash index.
Reported by Artem Ignatyev, analysis by Tom Lane.
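The shape of the fix is simply to skip the lock attempt for local buffers; a
sketch of that fragment in the buffer-read path (assumed form, not the exact
back-branch diff):

    /* Local buffers are backend-private and have no content lock to acquire. */
    if (mode == RBM_ZERO_AND_LOCK && !isLocalBuf)
        LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);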
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If a row that potentially violates a deferred exclusion constraint is
HOT-updated later in the same transaction, the exclusion constraint would
be reported as violated when the check finally occurs, even if the row(s)
the new row originally conflicted with have since been removed. This
happened because the wrong TID was passed to check_exclusion_constraint(),
causing the live HOT-updated row to be seen as a conflicting row rather
than recognized as the row-under-test.
Per bug #13148 from Evan Martin. It's been broken since exclusion
constraints were invented, so back-patch to all supported branches.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Analysis by Noah Misch shows that the 25% threshold set by commit
53bb309d2d5a9432d2602c93ed18e58bd2924e15 is lower than any other,
similar autovac threshold. While we don't know exactly what value
will be optimal for all users, it is better to err a little on the
high side than on the low side. A higher value increases the risk
that users might exhaust the available space and start seeing errors
before autovacuum can clean things up sufficiently, but a user who
hits that problem can compensate for it by reducing
autovacuum_multixact_freeze_max_age to a value dependent on their
average multixact size. On the flip side, if the emergency cap
imposed by that patch kicks in too early, the user will experience
excessive wraparound scanning and will be unable to mitigate that
problem by configuration. The new value will hopefully reduce the
risk of such bad experiences while still providing enough headroom
to avoid multixact member exhaustion for most users.
Along the way, adjust the documentation to reflect the effects of
commit 04e6d3b877e060d8445eb653b7ea26b1ee5cec6b, which taught
autovacuum to run for multixact wraparound even when autovacuum
is configured off.
|
|
|
|
| |
Thomas Munro, with some adjustments by me.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c advanced the stop point
at vacuum time, but this has subsequently been shown to be unsafe as a
result of analysis by myself and Thomas Munro and testing by Thomas
Munro. The crux of the problem is that the SLRU deletion logic may
get confused about what to remove if, at exactly the right time during
the checkpoint process, the head of the SLRU crosses what used to be
the tail.
This patch, by me, fixes the problem by advancing the stop point only
following a checkpoint. This has the additional advantage of making
the removal logic work during recovery more like the way it works during
normal running, which is probably good.
At least one of the calls to DetermineSafeOldestOffset which this patch
removes was already dead, because MultiXactAdvanceOldest is called only
during recovery and DetermineSafeOldestOffset was set up to do nothing
during recovery. That, however, is inconsistent with the principle that
recovery and normal running should work similarly, and was confusing to
boot.
Along the way, fix some comments that previous patches in this area
neglected to update. It's not clear to me whether there's any
concrete basis for the decision to use only half of the multixact ID
space, but it's neither necessary nor sufficient to prevent multixact
member wraparound, so the comments should not say otherwise.
|
|
|
|
|
|
|
|
| |
Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c failed to take into
account the possibility that there might be no multixacts in existence
at all.
Report by Thomas Munro; patch by me.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The logic introduced in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c
and repaired in commits 669c7d20e6374850593cb430d332e11a3992bbcf and
7be47c56af3d3013955c91c2877c08f2a0e3e6a2 helps to ensure that we don't
overwrite old multixact member information while it is still needed,
but a user who creates many large multixacts can still exhaust the
member space (and thus start getting errors) while autovacuum stands
idly by.
To fix this, progressively ramp down the effective value (but not the
actual contents) of autovacuum_multixact_freeze_max_age as member space
utilization increases. This makes autovacuum more aggressive and also
reduces the threshold for a manual VACUUM to perform a full-table scan.
This patch leaves unsolved the problem of ensuring that emergency
autovacuums are triggered even when autovacuum=off. We'll need to fix
that via a separate patch.
Thomas Munro and Robert Haas
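A purely illustrative sketch of the ramp-down idea; the 50% starting point, the
linear scaling, and the function name are assumptions for exposition, not the
committed logic:

    /*
     * Effective multixact freeze cutoff: the configured GUC while member-space
     * usage is comfortable, scaling down toward zero as the members SLRU fills.
     */
    static int
    effective_multixact_freeze_max_age(double member_space_used, int configured_max_age)
    {
        if (member_space_used <= 0.5)
            return configured_max_age;
        /* past the halfway point, ramp down linearly to zero at 100% usage */
        return (int) (configured_max_age * 2.0 * (1.0 - member_space_used));
    }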
|
|
|
|
|
|
|
|
|
|
| |
The old formula didn't have enough parentheses, so it would do the wrong
thing, and it used / rather than % to find a remainder. The effect of
these oversights is that the stop point chosen by the logic introduced in
commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c might be rather
meaningless.
Thomas Munro, reviewed by Kevin Grittner, with a whitespace tweak by me.
|
|
|
|
|
|
|
|
|
|
|
| |
The Service Control Manager should be notified regularly during a shutdown
that takes a long time. Previously we incremented the counter but forgot to
actually send the notification to the system. The loop counter was also
incorrectly initialized in the event that the startup of the system took long
enough for it to increase, which could cause the shutdown process not to wait
as long as expected.
Krystian Bigaj, reviewed by Michael Paquier
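For context, reporting progress to the SCM means both bumping the checkpoint
counter and actually calling SetServiceStatus(); a hedged sketch of that pattern
(general Windows service API usage, not the pg_ctl diff):

    #include <windows.h>

    /* Illustrative: tell the SCM we are still making progress on a slow stop. */
    static void
    report_stop_pending(SERVICE_STATUS_HANDLE hStatus, SERVICE_STATUS *status)
    {
        status->dwCurrentState = SERVICE_STOP_PENDING;
        status->dwWaitHint = 10000;         /* "check back within 10 seconds" */
        status->dwCheckPoint++;             /* incrementing alone is not enough... */
        SetServiceStatus(hStatus, status);  /* ...the SCM must also be told about it */
    }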
|
|
|
|
|
|
|
|
| |
pg_win32_is_junction() was a typo for pgwin32_is_junction(). open()
was used not only in a two-argument form, which breaks on Windows,
but also where BasicOpenFile() should have been used.
Per reports from Andrew Dunstan and David Rowley.
|
|
|
|
|
|
|
|
|
|
|
| |
Otherwise, if there's another crash, some writes from after the first
crash might make it to disk while writes from before the crash fail
to make it to disk. This could lead to data corruption.
Back-patch to all supported versions.
Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised
by me.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We need to create the pg_multixact/offsets file deleted by pg_upgrade
much earlier than we originally were: it was in TrimMultiXact(), which
runs after we exit recovery, but it actually needs to run earlier than
the first call to SetMultiXactIdLimit (before recovery), because that
routine already wants to read the first offset segment.
Per pg_upgrade trouble report from Jeff Janes.
While at it, silence a compiler warning about a pointless assert that an
unsigned variable was being tested non-negative. This was a signed
constant in Thomas Munro's patch which I changed to unsigned before
commit. Pointed out by Andres Freund.
|
|
|
|
|
|
| |
Reword messages, rename a confusingly named function.
Per Robert Haas.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Multixact member files are subject to early wraparound overflow and
removal: if the average multixact size is above a certain threshold (see
note below) the protections against offset overflow are not enough:
during multixact truncation at checkpoint time, some
pg_multixact/members files would be removed because the server considers
them to be old and not needed anymore. This leads to loss of files that
are critical for interpreting existing tuples' Xmax values.
To protect against this, since we don't have enough info in pg_control
and we can't modify it in old branches, we maintain shared memory state
about the oldest value that we need to keep; we use this during new
multixact creation to abort if an old still-needed file would get
overwritten. This value is kept up to date by checkpoints, which makes
it not completely accurate, but that should be good enough. We start emitting
warnings some time earlier, so that the eventual multixact shutdown doesn't
take DBAs completely by surprise (more precisely, once only 20 member SLRU
segments remain before shutdown).
On troublesome average multixact size: The threshold size depends on the
multixact freeze parameters. The oldest age is related to the greater of
multixact_freeze_table_age and multixact_freeze_min_age: anything
older than that should be removed promptly by autovacuum. If autovacuum
is keeping up with multixact freezing, the troublesome multixact average
size is
(2^32-1) / Max(freeze table age, freeze min age)
or around 28 members per multixact. Having an average multixact size
larger than that will eventually cause new multixact data to overwrite
the data area for older multixacts. (If autovacuum is not able to keep
up, or there are errors in vacuuming, the actual maximum is
multixact_freeze_max_age instead, at which point multixact generation
is stopped completely. The default value for this limit is 400 million,
which means that the multixact size that would cause trouble is about 10
members).
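To make that threshold concrete (assuming the default multixact_freeze_table_age
of 150 million and the default autovacuum_multixact_freeze_max_age of 400 million):

    (2^32 - 1) / 150,000,000 ≈ 28.6 members per multixact
    (2^32 - 1) / 400,000,000 ≈ 10.7 members per multixact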
Initial bug report by Timothy Garnett, bug #12990
Backpatch to 9.3, where the problem was introduced.
Authors: Álvaro Herrera, Thomas Munro
Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner
|
|
|
|
|
| |
Fix an oversight in commit 151e74719b0cc5c040bd3191b51b95f925773dd1 by
back-patching commit 44c5d387eafb4ba1a032f8d7b13d85c553d69181 to 9.0.
|
|
|
|
|
|
|
|
|
|
|
| |
An outer join appearing within the RHS of an antijoin can't commute with
the antijoin, but somehow I missed teaching make_outerjoininfo() about
that. In Teodor Sigaev's recent trouble report, this manifests as a
"could not find RelOptInfo for given relids" error within eqjoinsel();
but I think silently wrong query results are possible too, if the planner
misorders the joins and doesn't happen to trigger any internal consistency
checks. It's broken as far back as we had antijoins, so back-patch to all
supported branches.
|
|
|
|
|
|
|
| |
Each of the libraries incorporates src/port files, which often check
FRONTEND. Build systems disagreed on whether to build libpgtypes this
way. Only libecpg incorporates files that rely on it today. Back-patch
to 9.0 (all supported versions) to forestall surprises.
|
|
|
|
|
|
|
|
|
|
|
| |
The cross-reference to set_append_rel_pathlist() was obsoleted by
commit e2fa76d80ba571d4de8992de6386536867250474, which split what
had been set_rel_pathlist() and child routines into two sets of
functions. But I (tgl) evidently missed updating this comment.
Back-patch to 9.2 to avoid unnecessary divergence among branches.
Amit Langote
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the startup process recovers transactions by scanning the pg_twophase
directory, it should clear MyLockedGxact after it's done processing each
transaction, just as we do during normal operation at PREPARE TRANSACTION.
Otherwise, if the startup process exits due to an error, it will try to
clear the locking_backend field of the last recovered transaction. That's
usually harmless, but if the error happens in MarkAsPreparing, while
holding TwoPhaseStateLock, the shmem-exit hook will try to acquire
TwoPhaseStateLock again, and deadlock with itself.
This fixes bug #13128 reported by Grant McAlister. The bug was introduced
by commit bb38fb0d, so backpatch to all supported versions like that
commit.
|
|
|
|
|
|
|
|
| |
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT
I introduced this ancient typo in subtrans.c and later propagated it to
multixact.c. I fixed the latter in f741300c, but only back to 9.3;
backpatch to all supported branches for consistency.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
After a timeline switch, we would leave behind recycled WAL segments that
are in the future, but on the old timeline. After promotion, and after they
become old enough to be recycled again, we would notice that they don't have
a .ready or .done file, create a .ready file for them, and archive them.
That's bogus, because the files contain garbage, recycled from an older
timeline (or preallocated as zeros). We shouldn't archive such files.
This could happen when we're following a timeline switch during replay, or
when we switch to new timeline at end-of-recovery.
To fix, whenever we switch to a new timeline, scan the data directory for
WAL segments on the old timeline, but with a higher segment number, and
remove them. Those don't belong to our timeline history, and are most
likely bogus recycled or preallocated files. They could also be valid files
that we streamed from the primary ahead of time, but in any case, they're
not needed to recover to the new timeline.
|
|
|
|
| |
David Rowley
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It was previously possible to have the launcher re-execute its main loop
before shutting down if some other signal was received or an error
occurred after getting SIGTERM, as reported by Qingqing Zhou.
While investigating, Tom Lane further noticed that if autovacuum had
been disabled in the config file, it would misbehave by trying to start
a new worker instead of bailing out immediately -- it would consider
itself as invoked in emergency mode.
Fix both problems by checking the shutdown flag in a few more places.
These problems have existed since autovacuum was introduced, so
backpatch all the way back.
|
|
|
|
|
|
|
| |
While gcc doesn't complain if you declare a function "static" and then
define it not-static, other compilers do; and in any case the code is
highly misleading this way. Add the missing "static" keywords to a
couple of recent patches. Per buildfarm member pademelon.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we would re-use input subexpressions in all expression trees
attached to a Join plan node. However, if it's an outer join and the
subexpression appears in the nullable-side input, this is potentially
incorrect for apparently-matching subexpressions that came from above
the outer join (ie, targetlist and qpqual expressions), because the
executor will treat the subexpression value as NULL when maybe it should
not be.
The case is fairly hard to hit because (a) you need a non-strict
subexpression (else NULL is correct), and (b) we don't usually compute
expressions in the outputs of non-toplevel plan nodes. But we might do
so if the expressions are sort keys for a mergejoin, for example.
Probably in the long run we should make a more explicit distinction between
Vars appearing above and below an outer join, but that will be a major
planner redesign and not at all back-patchable. For the moment, just hack
set_join_references so that it will not match any non-Var expressions
coming from nullable inputs to expressions that came from above the join.
(This is somewhat overkill, in that a strict expression could still be
matched, but it doesn't seem worth the effort to check that.)
Per report from Qingqing Zhou. The added regression test case is based
on his example.
This has been broken for a very long time, so back-patch to all active
branches.
|
|
|
|
|
|
|
|
|
|
|
| |
Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to
have the internal nblkno variable either. My compiler didn't complain
about that, but some did. I also rearranged the use of oblkno a bit to
make that case more parallel.
Report and initial patch by Petr Jelinek, rearranged a bit by me.
Back-patch to all branches, like the previous patch.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, not finding
any server at all because of completely wrong combinations of
parameters.
Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mixing it with the other arguments. Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as first argument.
There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts. Therefore this is backpatched all the way back to 9.0.
To implement this, routines previously private to libpq have been
duplicated so that psql can decide what looks like a conninfo/URI
string. In back branches, just duplicate the same code all the way back
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
In master, the routines are moved to src/common and renamed.
Author: David Fetter, Andrew Dunstan. Some editorialization by me
(probably earning a Gierth's "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
|
|
|
|
| |
Petr Jelinek
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As with initdb, these programs need to run with a restricted token, and if
they don't, pg_upgrade will fail when run as a user with Administrator
privileges.
Backpatch to all live branches. On the development branch the code is
reorganized so that the restricted token code is now in a single
location. On the stable branches a less invasive change is made by
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.
Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
Paquier, slightly edited by me.
|