Commit messages

Sloppy refactoring in commit cca97ce6a caused these programs
to pass dbname = NULL to libpq if there was no "--dbname" switch
on the command line, where before "replication" would be passed.
This didn't break things completely, because the source server doesn't
care about the dbname specified for a physical replication connection.
However, it did cause libpq to fail to match a ~/.pgpass entry that
has "replication" in the dbname field. Restore the previous behavior
of passing "replication".
Also, closer inspection shows that if you do specify a dbname
in the connection string, that is what will be matched to ~/.pgpass,
not "replication". This was the pre-existing behavior so we should
not change it, but the SGML docs were pretty misleading about it.
Improve that.
Per bug #18685 from Toshi Harada. Back-patch to v17 where the
error crept in.
Discussion: https://postgr.es/m/18685-fee2dd142b9688f1@postgresql.org
Discussion: https://postgr.es/m/2702546.1730740456@sss.pgh.pa.us
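As an illustration of the restored behavior, here is a minimal libpq sketch (host, user, and password are made up; this is not part of the commit). With no dbname in the connection string, a physical replication connection is matched against a ~/.pgpass entry whose database field is "replication", e.g. primary.example.com:5432:replication:repuser:secret. If a dbname is given in the connection string, that value is matched instead, as noted above.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn;

    /* No dbname given: for a physical replication connection, libpq
     * matches the ~/.pgpass "database" field against "replication". */
    conn = PQconnectdb("host=primary.example.com user=repuser replication=true");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

    PQfinish(conn);
    return 0;
}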
Previously, we sorted rules by schema name and then rule name;
if that wasn't unique, we sorted by rule OID. This can be
problematic for comparing dumps from databases with different
histories, especially since certain rule names like "_RETURN"
are very common. Let's make the sort key schema name, rule name,
table name, which should be unique. (This is the same behavior
we've long used for triggers and RLS policies.)
Andreas Karlsson
Discussion: https://postgr.es/m/b4e468d8-0cd6-42e6-ac8a-1d6afa6e0cf1@proxel.se
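For illustration only (not pg_dump's actual code), a qsort() comparator implementing the new three-part sort key could look like this:

#include <string.h>

typedef struct RuleEntry
{
    const char *schemaname;
    const char *rulename;
    const char *tablename;
} RuleEntry;

/* Sort by schema name, then rule name, then table name. */
static int
rule_cmp(const void *a, const void *b)
{
    const RuleEntry *ra = (const RuleEntry *) a;
    const RuleEntry *rb = (const RuleEntry *) b;
    int         cmp;

    if ((cmp = strcmp(ra->schemaname, rb->schemaname)) != 0)
        return cmp;
    if ((cmp = strcmp(ra->rulename, rb->rulename)) != 0)
        return cmp;
    return strcmp(ra->tablename, rb->tablename);
}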
Author: Tender Wang
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAHewXN%3D3sH2sNw4nC3QGCEVw1Lftmw9m5y1Xje0bXK6ApDrsPQ%40mail.gmail.com
_bt_first doesn't necessarily hold onto a buffer pin on success exit.
Fix header comments that claimed that we'll always hold onto a pin.
Oversight in commit 2ed5b87f96.
Remove a local variable that was used to avoid overwriting strat_total
with the = operator strategy when a >= operator strategy key was already
included in the initial positioning/insertion scan keys by _bt_first
(for backwards scans it would have to be a <= key that was included).
_bt_first's strat_total local variable now simply tracks the operator
strategy of the final scan key that was included in the scan's insertion
scan key (barring the case where the !used_all_subkeys row compare path
adjusts strat_total in its own way).
_bt_first already treated >= keys (or <= keys) as = keys for initial
positioning purposes. There is no good reason to remember that that was
what happened; no later _bt_first step cares about the distinction.
Note, in particular, that the insertion scan key's 'nextkey' and
'backward' fields will be initialized the same way regardless.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=PKR6rB7qbx+Vnd7eqeB5VTcrW=iJvAsTsKbdG+kW_UA@mail.gmail.com
Split ProcSleep into two functions: JoinWaitQueue and ProcSleep.
JoinWaitQueue is called while holding the partition lock, and inserts
the current process into the wait queue, while ProcSleep() does the
actual sleeping. ProcSleep() is now called without holding the
partition lock, and it no longer re-acquires the partition lock before
returning. That makes the wakeup a little cheaper. Once upon a time,
re-acquiring the partition lock was needed to prevent a signal handler
from longjmping out at a bad time, but these days our signal handlers
just set flags, and longjmping can only happen at points where we
explicitly run CHECK_FOR_INTERRUPTS().
If JoinWaitQueue detects an "early deadlock" before even joining the
wait queue, it returns without changing the shared lock entry, leaving
the cleanup of the shared lock entry to the caller. This makes the
handling of an early deadlock the same as the dontWait=true case.
One small user-visible side-effect of this refactoring is that we now
only set the 'ps' title to say "waiting" when we actually enter the
sleep, not when the lock is skipped because dontWait=true, or when a
deadlock is detected early before entering the sleep.
This eliminates the 'lockAwaited' global variable in proc.c, which was
largely redundant with 'awaitedLock' in lock.c.
Note: Updating the local lock table is now the caller's responsibility.
JoinWaitQueue and ProcSleep are now only responsible for modifying the
shared state. Seems a little nicer that way.
Based on Thomas Munro's earlier patch and observation that ProcSleep
doesn't really need to re-acquire the partition lock.
Reviewed-by: Maxim Orlov
Discussion: https://www.postgresql.org/message-id/7c2090cd-a72a-4e34-afaa-6dd2ef31440e@iki.fi
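A generic, self-contained sketch of the same pattern in plain POSIX threads (hypothetical names and structure, not PostgreSQL's actual API): the waiter enqueues itself while holding the lock that protects the queue, then sleeps on its own semaphore without holding that lock, and the woken process never needs to reacquire the queue lock.

#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

typedef struct Waiter
{
    sem_t       wakeup;         /* initialized with sem_init(&w->wakeup, 0, 0) */
    struct Waiter *next;
} Waiter;

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static Waiter *wait_queue = NULL;   /* protected by queue_lock */

/* "JoinWaitQueue" part: insert ourselves while holding the queue lock. */
static void
join_queue(Waiter *self)
{
    pthread_mutex_lock(&queue_lock);
    self->next = wait_queue;
    wait_queue = self;
    pthread_mutex_unlock(&queue_lock);
}

/* "ProcSleep" part: block without holding the queue lock. */
static void
sleep_until_woken(Waiter *self)
{
    sem_wait(&self->wakeup);
}

/* Waker: pop a waiter under the lock, then post its semaphore.  The woken
 * process never has to take queue_lock again, which makes wakeup cheaper. */
static void
wake_one(void)
{
    Waiter     *w;

    pthread_mutex_lock(&queue_lock);
    w = wait_queue;
    if (w != NULL)
        wait_queue = w->next;
    pthread_mutex_unlock(&queue_lock);

    if (w != NULL)
        sem_post(&w->wakeup);
}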
Suppose that you run a command like "pg_combinebackup b1 b2 -o output",
but both b1 and b2 contain an INCREMENTAL.$something file in a directory
that is expected to contain relation files. This is an error, but the
previous code would not detect the problem and instead write a garbage
full file named $something to the output directory. This commit adds
code to detect the error and a test case to verify the behavior.
It's difficult to imagine that this will ever happen unless someone
is intentionally trying to break incremental backup, but per discussion,
let's consider that the lack of adequate sanity checking in this area is
a bug and back-patch to v17, where incremental backup was introduced.
Patch by me, reviewed by Bertrand Drouvot and Amul Sul.
Discussion: http://postgr.es/m/CA+TgmoaD7dBYPqe7kMtO0dyto7rd0rUh7joh=JPUSaFszKY6Pg@mail.gmail.com
This function is always called with a relative_path that ends in a
slash, so there's no need to insert a second one. So, don't. Instead,
add an assertion to verify that nothing gets broken in the future, and
adjust the comments.
While this is not a critical bug, the duplicate slash is visible in
error messages, which could create confusion, so back-patch to v17.
This is also better in that it keeps the code consistent across
branches.
Patch by me, reviewed by Bertrand Drouvot and Amul Sul.
Discussion: http://postgr.es/m/CA+TgmoaD7dBYPqe7kMtO0dyto7rd0rUh7joh=JPUSaFszKY6Pg@mail.gmail.com
LockAcquire is a long and complex function. Pushing more stuff to its
subroutines makes it a little more manageable.
Reviewed-by: Maxim Orlov
Discussion: https://www.postgresql.org/message-id/7c2090cd-a72a-4e34-afaa-6dd2ef31440e@iki.fi
Previously, ProcSleep()'s caller was responsible for setting
MyProc->heldLocks, and we had comments to remind about that. But it
seems simpler to make ProcSleep() itself responsible for it.
ProcSleep() already sets the other info about the lock it's waiting for
(waitLock, waitProcLock and waitLockMode), so it is natural for it to
set heldLocks too.
Reviewed-by: Maxim Orlov
Discussion: https://www.postgresql.org/message-id/7c2090cd-a72a-4e34-afaa-6dd2ef31440e@iki.fi
_bt_endpoint is a helper function for _bt_first that's called whenever
no useful insertion scan key can be used, and we need to lock and read
either the leftmost or rightmost leaf page in the index. Simplify and
document its preconditions, relieving its _bt_first caller from having
to end the parallel scan when it returns false.
Also stop unnecessarily invalidating the current scan position in nearby
code in both _bt_first and _bt_endpoint. This seems to have been
copy-pasted from _bt_readnextpage, where invalidating the scan's current
position really is necessary.
Follow-up to the refactoring work in commit 1bd4bc85.
We also reach this case when, for example, a deadlock is detected, not
only when we run out of memory.
Reviewed-by: Maxim Orlov
Discussion: https://www.postgresql.org/message-id/7c2090cd-a72a-4e34-afaa-6dd2ef31440e@iki.fi
The Meson builds have PG_TEST_EXTRA as a configure-time variable,
which was not available in the Make builds. To ensure both build
systems are in sync, PG_TEST_EXTRA is now added as a configure-time
variable. It can be set like this:
./configure PG_TEST_EXTRA="kerberos, ssl, ..."
Note that to preserve the old behavior, this configure-time variable
is overridden by the PG_TEST_EXTRA environment variable when you run
the tests.
Author: Jacob Champion
Reviewed by: Ashutosh Bapat, Nazir Bilal Yavuz
"meson test" used to ignore the PG_TEST_EXTRA environment variable,
which meant that in order to run additional tests, you had to run
"meson setup -DPG_TEST_EXTRA=...". That's somewhat expensive, and not
consistent with autoconf builds. Allow PG_TEST_EXTRA environment
variable to override the setup-time option at run time, so that you
can do "PG_TEST_EXTRA=... meson test".
To implement this, the configuration time value is passed as an extra
"--pg-test-extra" argument to testwrap instead of adding it to the
test environment. If the environment variable is set when the tests
are run, testwrap uses the value from the environment variable and
ignores the --pg-test-extra option.
Now that "meson test" obeys the environment variable, we can remove it
from the "meson setup" steps in the CI script. It will now be picked
up from the environment variable like with "make check".
Author: Nazir Bilal Yavuz, Ashutosh Bapat
Reviewed-by: Ashutosh Bapat with inputs from Tom Lane and Andrew Dunstan
arrays.sql was already missing it before 49d6c7d8daba, and I have just
noticed it thanks to this commit. The second one in test_slru has been
introduced by 768a9fd5535f.
Buildfarm member mamba fails to deduce that the function never uses this
variable without initializing it. Back-patch to v12, like commit
b412f402d1e020c5dac94f3bf4a005db69519b99.
A CacheInvalidateHeapTuple* callee might call
CatalogCacheInitializeCache(), which needs a relcache entry. Acquiring
a valid relcache entry might scan pg_class. Hence, to prevent
undetected LWLock self-deadlock, CacheInvalidateHeapTuple* callers must
not hold BUFFER_LOCK_EXCLUSIVE on buffers of pg_class. Move the
CacheInvalidateHeapTupleInplace() before the BUFFER_LOCK_EXCLUSIVE. No
back-patch, since I've reverted commit
243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704 from non-master branches.
Reported by Alexander Lakhin. Reviewed by Alexander Lakhin.
Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
Commit a07e03fd8fa7daf4d1356f7cb501ffe784ea6257 enlarged the work done
here under the pg_class heap buffer lock. Two preexisting actions are
best done before holding that lock. Both RelationGetNumberOfBlocks()
and visibilitymap_count() do I/O, and the latter might exclusive-lock a
visibility map buffer. Moving these reduces contention and risk of
undetected LWLock deadlock. Back-patch to v12, like that commit.
Discussion: https://postgr.es/m/20241031200139.b4@rfd.leadboat.com
Oversight in commit 5bf748b8.
Instead of talking about setting latches, which is a pretty low-level
mechanism, emphasize that they wake up other processes.
This is in preparation for replacing Latches with a new abstraction.
That's still work in progress, but this seems a little tidier anyway,
so let's get this refactoring out of the way already.
Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c%40iki.fi
This is in preparation for replacing Latches with a new abstraction.
That's still work in progress, but this seems a little tidier anyway,
so let's get this refactoring out of the way already.
Discussion: https://www.postgresql.org/message-id/391abe21-413e-4d91-a650-b663af49500c%40iki.fi
On closer inspection, this makes the all-zero check of the page more
expensive, so let's remove the new function call in bufpage.c. The
math of the check was also incorrect, checking that the page was full
of zeros only for the first 1kB.
This brings the code back to the state it was in at 49d6c7d8daba.
Per discussion with David Rowley and Bertrand Drouvot.
Discussion: https://postgr.es/m/CAApHDvrXzPAr3FxoBuB7b3D-okNoNA2jxLun1rW8Yw5wkbqusw@mail.gmail.com
This new function tests whether a memory region of a given length,
starting at a given location, consists only of zeroes. It unifies in a
single code path the all-zero checks that were happening in a couple of
places of the backend code:
- For the pgstats entries of relations, the checkpointer and the
bgwriter, where some "all_zeroes" variables were previously compared
with memcmp().
- For all-zero buffer pages in PageIsVerifiedExtended().
This new function uses the same forward scan as the check for all-zero
buffer pages, applying it to the three pgstats paths mentioned above.
Author: Bertrand Drouvot
Reviewed-by: Peter Eisentraut, Heikki Linnakangas, Peter Smith
Discussion: https://postgr.es/m/ZupUDDyf1hHI4ibn@ip-10-97-1-34.eu-west-3.compute.internal
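A minimal sketch of such a check, assuming a (pointer, length) interface; this is illustrative only, and the in-tree helper may differ (for instance by scanning the region in word-sized chunks):

#include <stdbool.h>
#include <stddef.h>

static inline bool
memory_is_all_zeros(const void *ptr, size_t len)
{
    const unsigned char *p = (const unsigned char *) ptr;
    const unsigned char *end = p + len;

    /* simple forward scan, stopping at the first nonzero byte */
    for (; p < end; p++)
    {
        if (*p != 0)
            return false;
    }
    return true;
}

A caller can then replace a pattern like memcmp(&entry, &all_zeroes, sizeof(entry)) == 0 with a single call on the struct's address and size.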
This function takes an array as input and reverses the position of all
its elements. The operation only affects the first dimension of the
array, like array_shuffle().
The implementation structure is inspired by array_shuffle(), with a
subroutine called array_reverse_n() that may come in handy in the
future, should more functions able to reverse portions of arrays be
introduced.
Bump catalog version.
Author: Aleksander Alekseev
Reviewed-by: Ashutosh Bapat, Tom Lane, Vladlen Popolitov
Discussion: https://postgr.es/m/CAJ7c6TMpeO_ke+QGOaAx9xdJuxa7r=49-anMh3G5476e3CX1CA@mail.gmail.com
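Illustrative only (not the committed array_reverse_n()): the core of such a reversal is an in-place swap of element values and null flags from both ends of the first dimension.

#include "postgres.h"

/* Reverse the first 'n' element values and null flags in place. */
static void
reverse_datums(Datum *elems, bool *nulls, int n)
{
    for (int i = 0, j = n - 1; i < j; i++, j--)
    {
        Datum       tmp_d = elems[i];
        bool        tmp_n = nulls[i];

        elems[i] = elems[j];
        nulls[i] = nulls[j];
        elems[j] = tmp_d;
        nulls[j] = tmp_n;
    }
}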
This patch responds to a comment that I (tgl) made in the
discussion leading up to 774171c4f, that really all errors
occurring during raw parsing should provide error cursors.
Syntax errors reported by Bison will have one, and most of
the handwritten ereport's in gram.y already provide one,
but there were a few stragglers.
(It is not claimed that this handles every failure reachable
during raw parsing --- out-of-memory is an obvious exception.
But this makes a good start on cases that are likely to occur.)
While we're at it, clean up the reported positions for errors
associated with LIMIT/OFFSET clauses. Previously we were
relying on applying exprLocation() to the contained expressions,
but that leads to slightly odd cursor placement, e.g.
regression=# (select * from foo limit 10) limit 10;
ERROR: multiple LIMIT clauses not allowed
LINE 1: (select * from foo limit 10) limit 10;
^
We can afford to keep a little more state in the transient
SelectLimit structs in order to make that better.
Jian He and Tom Lane (extracted from a larger patch by Jian,
with some additional work by me)
Discussion: https://postgr.es/m/CACJufxEmONE3P2En=jopZy1m=cCCUs65M4+1o52MW5og9oaUPA@mail.gmail.com
This allows an error cursor to be supplied for a bunch of
bad-function-definition errors that previously lacked one,
or that cheated a bit by pointing at the contained type name
when the error isn't really about that.
Bump catversion from an abundance of caution --- I don't think
this node type can actually appear in stored views/rules, but
better safe than sorry.
Jian He and Tom Lane (extracted from a larger patch by Jian,
with some additional work by me)
Discussion: https://postgr.es/m/CACJufxEmONE3P2En=jopZy1m=cCCUs65M4+1o52MW5og9oaUPA@mail.gmail.com
Buildfarm member 'prion', which is configured with
-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE, failed with errors
like this:
ERROR: could not read blocks 0..0 in file "global/2672": read only 0 of 8192 bytes
while running a parallel test group that includes VACUUM FULL on some
catalog tables among other things. I was not able to reproduce that
just by running the tests with -DRELCACHE_FORCE_RELEASE
-DCATCACHE_FORCE_RELEASE, even though 'prion' hit it on first run
after commit 2b9b8ebbf8, so there might be something else that makes
it more susceptible to the race. However, I was able to reproduce it
by adding another test to the same test group that runs "vacuum full
pg_database" repeatedly.
The problem is that RelationReloadIndexInfo() no longer calls
RelationInitPhysicalAddr() on a nailed, shared index, when an
invalidation happens early during backend startup, before the critical
relcaches have been built. Before commit 2b9b8ebbf8, that was done by
RelationReloadNailed(), but it went missing from that path. Add it
back as an explicit step.
Broken by commit 2b9b8ebbf8, which refactored these functions.
Discussion: https://www.postgresql.org/message-id/db876575-8f5b-4193-a538-df7e1f92d47a%40iki.fi
A few comments contained a duplicate "the" in sentences; fix by
removing one occurrence.
Author: Vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/CALDaNm2aEEiPwGJmPdzBxROVvs8n75yCjKz4K1f1B2TdWpzxTA@mail.gmail.com
The old RelationClearRelation function did different things depending
on the arguments and circumstances. It could:
a) remove the relation completely from relcache (rebuild == false),
b) mark the entry as invalid (rebuild == true, but not in xact), or
c) rebuild the entry (rebuild == true).
Different callers used it for different purposes, and often assumed a
particular behavior, which was confusing. Split it into three
different functions, one for each of the above actions (one of them,
RelationInvalidateRelation, was already added in commit e6cd857726).
Move the responsibility of choosing the action and calling the right
function to the callers.
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/9c9e8908-7b3e-4ce7-85a8-00c0e165a3d6%40iki.fi
RelationClearRelation(rebuild == true) calls RelationReloadIndexInfo()
for indexes. We can rely on that in RelationIdGetRelation(), instead
of calling RelationReloadIndexInfo() directly. That simplifies the
code a little.
In passing, add a comment in RelationBuildLocalRelation()
explaining why it doesn't call RelationInitIndexAccessInfo(). It's
because at index creation, it's called before the pg_index row has
been created. That's also the reason that RelationClearRelation()
still needs a special case to go through the full-blown rebuild if the
index support information in the relcache entry hasn't been populated
yet.
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/9c9e8908-7b3e-4ce7-85a8-00c0e165a3d6%40iki.fi
bf6c614a2 did some conversion work to use ExprState instead of manually
calling equality functions to check if one set of values is not distinct
from another set. That patch removed many of the fields that became
redundant as a result of that change, but it forgot to remove
SubPlanState.tab_eq_funcs. Fix that.
In passing, fix the header comment for TupleHashEntryData to correctly
spell the field name it's talking about.
Author: Rafia Sabih <rafia.pghackers@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA+FpmFeycdombFzrjZw7Rmc29CVm4OOzCWwu=dVBQ6q=PX8SvQ@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrWR2jYVhec=COyF2g2BE_ns91NDsCHAMFiXbyhEujKdQ@mail.gmail.com
9f00edc22888 has disabled a permutation due to failures in the CI for
FreeBSD environments, but this is a matter of timing. Let's properly
document why this type of permutation is a bad idea when relying on a
wait done in a SQL function, so that this can be avoided when
implementing new tests (this spec is also a template).
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/ZyCa2qsopKaw3W3K@paquier.xyz
Follow-up to bugfix commit 763d65ae. Technically this new assertion is
redundant with the assertion recently added to _bt_readpage by that same
commit, but it seems like a good idea to have both.
The new assertion makes it clear that we expect to call _bt_readnextpage
when there's another primitive index scan scheduled, though only when
needed as the final step of ending the current primitive scan.
Strictly speaking, we only need to make sure to leave the scan's array
keys in their final positions (final for the current scan direction) to
handle SAOP array exhaustion because btgettuple might only return a
subset of the items for the final page (final for the current scan
direction), before the scan changes direction. While it's typical for
so->currPos to be invalidated shortly after the scan's arrays are first
exhausted, and while so->currPos invalidation does obviate the need to
leave the scan's arrays in any particular state, we can't rely on any of
that actually happening when handling array exhaustion. Adjust comments
to make all of that a lot clearer.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Presently, each iteration of the loop in sift_down() will perform
3 comparisons if both children are larger than the parent node (2
for comparing each child to the parent node, and a third to compare
the children to each other). By first comparing the children to
each other and then comparing the larger child to the parent node,
we can accomplish the same thing with just 2 comparisons (while
also not affecting the number of comparisons in any other case).
Author: ChangAo Chen
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/tencent_0142D8DA90940B9930BCC08348BBD6D0BB07%40qq.com
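A self-contained illustration on a plain int max-heap (not binaryheap.c's actual code), showing the reordered comparisons: one comparison picks the larger child, and one more decides whether to stop or swap.

#include <stddef.h>

static void
swap_ints(int *a, int *b)
{
    int     tmp = *a;

    *a = *b;
    *b = tmp;
}

/* Sift the element at 'node' down a max-heap stored in heap[0..nelems-1]. */
static void
sift_down(int *heap, size_t nelems, size_t node)
{
    for (;;)
    {
        size_t  left = 2 * node + 1;
        size_t  right = 2 * node + 2;
        size_t  largest;

        if (left >= nelems)
            break;              /* no children, done */

        /* first comparison: pick the larger of the two children */
        if (right < nelems && heap[right] > heap[left])
            largest = right;
        else
            largest = left;

        /* second comparison: does the larger child beat the parent? */
        if (heap[largest] <= heap[node])
            break;

        swap_ints(&heap[node], &heap[largest]);
        node = largest;
    }
}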
An operation like '12:34:56'::time_tz takes the UTC offset from
the prevailing time zone, which means that the results change
across DST transitions. One of the test cases added in ed055d249
failed to consider this.
Per report from Bernhard Wiedemann. Back-patch to v17, as the
test case was.
Discussion: https://postgr.es/m/ba8e1bc0-8a99-45b7-8397-3f2e94415e03@suse.de
A bug in nbtree's handling of primitive index scan scheduling could lead
to wrong answers when a scrollable cursor was used with an index scan
that had a SAOP index qual. Wrong answers were only possible when the
scan direction changed after a primitive scan was scheduled, but before
_bt_next was asked to fetch the next tuple in line (i.e. for things to
break, _bt_next had to be denied the opportunity to step off the page in
the same direction as the one used when the primscan was scheduled).
Furthermore, the issue only occurred when the page in question happened
to be the first page to be visited by the entire top-level scan; the
issue hinged upon the cursor backing up to the absolute beginning of the
key space that it returns tuples from (fetching in the opposite scan
direction across a "primitive scan boundary" always worked correctly).
To fix, make _bt_next unset the "needs primitive index scan" flag when
it detects that the current scan direction is not the one that was used
by _bt_readpage back when the primitive scan in question was scheduled.
This fixes the cases that are known to be faulty, and also seems like a
good idea on general robustness grounds.
Affected scrollable cursor cases now avoid a spurious primitive index
scan when they fetch backwards to the absolute start of the key space to
be visited by their cursor. Fetching backwards now only returns those
tuples at the start of the scan, as expected. It'll also be okay to
once again fetch forwards from the start at that point, since the scan
will be left in a state that's exactly consistent with the state it was
in before any tuples were ever fetched, as expected.
Oversight in commit 5bf748b8, which enhanced nbtree ScalarArrayOp
execution.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wznv49bFsE2jkt4GuZ0tU2C91dEST=50egzjY2FeOcHL4Q@mail.gmail.com
Backpatch: 17-, where commit 5bf748b8 first appears.
* In DetachPartitionFinalize() we were applying a tuple conversion map
to tuples that didn't need one, which can lead to erratic behavior if
a partitioned table has a partition with a different column order, as
reported by Alexander Lakhin. This was introduced by 53af9491a043.
Don't do that. Also, modify a recently added test case to exercise
this.
* The same function as well as CloneFkReferenced() were acquiring
AccessShareLock on a partition, only to have CreateTrigger() later
acquire ShareRowExclusiveLock on it. This can lead to deadlock by
lock escalation, unnecessarily. Avoid that by acquiring the stronger
lock to begin with. This probably dates back to branch 12, but I have
never seen a report of this being a problem in the field.
* Innocuous but wasteful: also introduced by 53af9491a043, we were
reading a pg_constraint tuple from syscache that we don't need, as
reported by Tender Wang. Don't.
Backpatch to 15.
Discussion: https://postgr.es/m/461e9c26-2076-8224-e119-84998b6a784e@gmail.com
The test programs in src/common/unicode/ (case_test, category_test,
norm_test) don't build with meson if the nls option is enabled,
because a libintl dependency is missing. Fix that. (The makefiles
are ok.)
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/52db1d2b-4b96-473e-b323-a4b16a950fba%40eisentraut.org
This commit allows logical replication to publish and replicate generated
columns when explicitly listed in the column list. We also ensured that
the generated columns were copied during the initial tablesync when they
were published.
We will allow replicating generated columns even when they are not
specified in the column list (via a new publication option) in a separate
commit.
The motivation for this work is to allow replication in cases where the
client doesn't have generated columns, for example, when replicating
data from Postgres to a non-Postgres database.
Author: Shubham Khanna, Vignesh C, Hou Zhijie
Reviewed-by: Peter Smith, Hayato Kuroda, Shlok Kyal, Amit Kapila
Discussion: https://postgr.es/m/B80D17B2-2C8E-4C7D-87F2-E5B4BE3C069E@gmail.com
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/98b2fcf0-f701-369e-d63d-6be9739ce17c@gmail.com
Commit a07e03fd8fa7daf4d1356f7cb501ffe784ea6257 changed inplace updates
to wait for heap_update() commands like GRANT TABLE and GRANT DATABASE.
By keeping the pin during that wait, a sequence of autovacuum workers
and an uncommitted GRANT starved one foreground LockBufferForCleanup()
for six minutes, on buildfarm member sarus. Prevent this, at the cost
of a bit of complexity. Back-patch to v12, like the earlier commit. That
commit and heap_inplace_lock() have not yet appeared in any release.
Discussion: https://postgr.es/m/20241026184936.ae.nmisch@google.com
Historical corrections for Mexico, Mongolia, and Portugal.
Notably, Asia/Choibalsan is now an alias for Asia/Ulaanbaatar
rather than being a separate zone, mainly because the differences
between those zones were found to be based on untrustworthy data.
Move the CreateStmt down to the branch that it's used in, thus
preventing the makeNode() call in cases where the CreateStmt isn't used.
Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQAq=06YPWPhS+yyTbCwv5JLKRz8rm3dWx6JR5Uj_d_fQDA@mail.gmail.com
Author: Anton Voloshin <a.voloshin@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/aa8a55d5-554a-4027-a491-1b0ca7c85f7a@postgrespro.ru
There are two functions that can be used in event triggers to get more
details about a rewrite happening on a relation. Both had limited
documentation:
- pg_event_trigger_table_rewrite_reason() and
pg_event_trigger_table_rewrite_oid() were not mentioned in the main
event trigger section in the paragraph dedicated to the event
table_rewrite.
- pg_event_trigger_table_rewrite_reason() returns an integer which is a
bitmap of the reasons why a rewrite happens. There was no explanation
about the meaning of these values, forcing the reader to look at the
code to find out that these are defined in event_trigger.h.
While at it, let's add a comment in event_trigger.h, where the
AT_REWRITE_* values are defined, as a reminder to update the
documentation when these values are changed.
Backpatch down to 13 as a consequence of 1ad23335f36b, where this area
of the documentation has been heavily reworked.
Author: Greg Sabino Mullane
Discussion: https://postgr.es/m/CAKAnmmL+Z6j-C8dAx1tVrnBmZJu+BSoc68WSg3sR+CVNjBCqbw@mail.gmail.com
Backpatch-through: 13
A pg_depend entry between a partitioned table and its table access
method was missing when using CREATE TABLE .. USING with an unpinned
access method. As a result, DROP ACCESS METHOD would succeed even when
a partitioned table still depended on the access method, while it should
be blocked unless CASCADE is specified. pg_class.relam would then hold
an orphaned OID value, still pointing to the dropped AM.
The problem is fixed by adding a dependency between the partitioned
table and its table access method if set when the relation is created.
A test checking the contents of pg_depend in this case is added.
Issue introduced in 374c7a229042, which added support for CREATE
TABLE .. USING for partitioned tables.
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/18674-1ef01eceec278fab@postgresql.org
Backpatch-through: 17
I assumed that all index_drop() callers set an active snapshot
beforehand, but that is evidently not true. One counterexample is
autovacuum, which doesn't set an active snapshot when cleaning up
orphan temp indexes. To fix, unconditionally push an active
snapshot before updating pg_index in index_drop().
Oversight in commit b52adbad46.
Reported-by: Masahiko Sawada
Reviewed-by: Stepan Neretin, Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoBgF9etQrXbN9or_YHsmBRJHHNUEkhHp9rGK9CyQv5aTQ%40mail.gmail.com
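A sketch of the pattern described (not necessarily the exact committed hunk), using the existing snapshot API:

/* Push an active snapshot unconditionally around the catalog update,
 * since callers such as autovacuum may not have one set. */
PushActiveSnapshot(GetTransactionSnapshot());

/* ... update the pg_index row here ... */

PopActiveSnapshot();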
As threatened in the previous patch, define MaxAllocSize in
src/include/common/fe_memutils.h rather than having several
copies of it in different src/common/*.c files. This also
provides an opportunity to document it better.
While this would probably be safe to back-patch, I'll refrain
(for now anyway).
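For reference, the backend definition in utils/memutils.h is the one below; a frontend-side definition in common/fe_memutils.h would presumably mirror it (a sketch, not the committed hunk):

/* Backend definition in utils/memutils.h; a frontend copy would match. */
#define MaxAllocSize    ((Size) 0x3fffffff)     /* 1 gigabyte - 1 */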
Coverity complained that pg_saslprep() could suffer integer overflow,
leading to under-allocation of the output buffer, if the input string
exceeds SIZE_MAX/4. This hazard seems largely hypothetical, but it's
easy enough to defend against, so let's do so.
This patch creates a third place in src/common/ where we are locally
defining MaxAllocSize so that we can test against that in the same way
in backend and frontend compiles. That seems like about two places
too many, so the next patch will move that into common/fe_memutils.h.
I'm hesitant to do that in back branches however.
Back-patch to v14. The code looks similar in older branches, but
before commit 67a472d71 there was a separate test on the input string
length that prevented this hazard.
Per Coverity report.
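A generic sketch of this kind of guard (hypothetical function and limit name, not the committed code): check the input length before computing a size that multiplies by 4, so the allocation cannot be silently undersized.

#include <stdlib.h>
#include <string.h>

#define MAX_ALLOC_SIZE ((size_t) 0x3fffffff)    /* stand-in for MaxAllocSize */

static char *
expand_up_to_4x(const char *input)
{
    size_t      inlen = strlen(input);
    char       *buf;

    /* guard first: otherwise inlen * 4 could wrap around to a small value */
    if (inlen > MAX_ALLOC_SIZE / 4)
        return NULL;

    buf = malloc(inlen * 4 + 1);
    if (buf == NULL)
        return NULL;

    /* ... produce up to 4 output bytes per input byte ... */
    buf[0] = '\0';
    return buf;
}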