Attempting to use commit timestamps during bootstrapping leads to an
assertion failure that can be reached, for example, with an initdb -c
option that enables track_commit_timestamp. It makes little sense to
register a commit timestamp for a BootstrapTransactionId, so let's
disable the activation of the module in this case.
This problem has been independently reported once by each author of this
commit. Each author has proposed basically the same patch, relying on
IsBootstrapProcessingMode() to skip the use of commit_ts during
bootstrap. The test addition is a suggestion by me, and is applied down
to v16.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Author: Andy Fan <zhihuifan1213@163.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/OSCPR01MB14966FF9E4C4145F37B937E52F5102@OSCPR01MB14966.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/87plejmnpy.fsf@163.com
Backpatch-through: 13
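
For context, a quick sketch of what the module provides once it is
active after bootstrap; the table name here is hypothetical:

    -- e.g. after initdb -c track_commit_timestamp=on
    SHOW track_commit_timestamp;
    SELECT pg_xact_commit_timestamp(xmin) FROM mytab LIMIT 1;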
If, after removal of useless null-constant arguments, a CoalesceExpr
has exactly one remaining argument, we can just take that argument as
the result, without bothering to wrap a new CoalesceExpr around it.
This isn't likely to produce any great improvement in runtime per se,
but it can lead to better plans since the planner no longer has to
treat the expression as non-strict.
However, there were a few regression test cases that intentionally
wrote COALESCE(x) as a shorthand way of creating a non-strict
subexpression. To avoid ruining the intent of those tests, write
COALESCE(x,x) instead. (If anyone ever proposes de-duplicating
COALESCE arguments, we'll need another iteration of this arms race.
But it seems pretty unlikely that such an optimization would be
worthwhile.)
Author: Maksim Milyutin <maksim.milyutin@tantorlabs.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/8e8573c3-1411-448d-877e-53258b7b2be0@tantorlabs.ru
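
As an illustration, using regression-test style names; the
one-argument form now simplifies away, while the duplicated-argument
form keeps the non-strict wrapper:

    -- COALESCE(f1) now reduces to just f1, so the planner may again
    -- treat the expression as strict; COALESCE(f1, f1) does not.
    EXPLAIN (VERBOSE, COSTS OFF) SELECT COALESCE(f1) FROM int4_tbl;
    EXPLAIN (VERBOSE, COSTS OFF) SELECT COALESCE(f1, f1) FROM int4_tbl;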
Commits 8319e5cb5 et al missed the fact that ATPostAlterTypeCleanup
contains three calls to ATPostAlterTypeParse, and the other two
also need protection against passing a relid that we don't yet
have lock on. Add similar logic to those code paths, and add
some test cases demonstrating the need for it.
In v18 and master, the test cases demonstrate that there's a
behavioral discrepancy between stored generated columns and virtual
generated columns: we disallow changing the expression of a stored
column if it's used in any composite-type columns, but not that of
a virtual column. Since the expression isn't actually relevant to
either sort of composite-type usage, this prohibition seems
unnecessary; but changing it is a matter for separate discussion.
For now we are just documenting the existing behavior.
Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: CACJufxGKJtGNRRSXfwMW9SqVOPEMdP17BJ7DsBf=tNsv9pWU9g@mail.gmail.com
Backpatch-through: 13
This was previously harmless, but now that we create pg_constraint rows
for those, duplicates are not welcome anymore.
Backpatch to 18.
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CACJufxFSC0mcQ82bSk58sO-WJY4P-o4N6RD2M0D=DD_u_6EzdQ@mail.gmail.com
If certain constraint characteristic clauses (NO INHERIT, NOT VALID, NOT
ENFORCED) are given to CREATE CONSTRAINT TRIGGER, the resulting error
message is
    ERROR: TRIGGER constraints cannot be marked NO INHERIT
which is a bit silly, because these aren't "constraints of type
TRIGGER". Hardcode a better error message instead. This is a
cosmetic fix for quite a fringe problem with no known complaints from
users, so no backpatch.
While at it, silently accept ENFORCED if given.
Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAAJ_b97hd-jMTS7AjgU6TDBCzDx_KyuKxG+K-DtYmOieg+giyQ@mail.gmail.com
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
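
A sketch of the rejected form; the trigger, table and function names
are hypothetical:

    CREATE CONSTRAINT TRIGGER trg AFTER INSERT ON tab
      NO INHERIT
      FOR EACH ROW EXECUTE FUNCTION trg_func();
    -- previously: ERROR: TRIGGER constraints cannot be marked NO INHERIT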
The COPY FROM command now accepts a non-negative integer for the HEADER option,
allowing multiple header lines to be skipped. This is useful when the input
contains multi-line headers that should be ignored during data import.
Author: Shinya Kato <shinya11.kato@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/CAOzEurRPxfzbxqeOPF_AGnAUOYf=Wk0we+1LQomPNUNtyZGBZw@mail.gmail.com
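
A usage sketch, with hypothetical table and file names, skipping a
two-line header:

    COPY measurements FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER 2);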
Currently check_recovery_target_timeline() converts any value that is
not "current", "latest", or a valid integer to 0. So, for example, the
following configuration added to postgresql.conf followed by a startup:
    recovery_target_timeline = 'bogus'
    recovery_target_timeline = '9999999999'
... results in the following error patterns:
    FATAL: 22023: recovery target timeline 0 does not exist
    FATAL: 22023: recovery target timeline 1410065407 does not exist
This is confusing: the server does not reflect the user's intention,
and just reports incorrect data unrelated to the GUC.
The origin of the problem is that we do not perform a range check on
the GUC value passed in for recovery_target_timeline. This commit
improves the situation by using strtou64() and by providing stricter
range checks. Test cases are added for an incorrect value, an
upper-bound value, and a lower-bound value, checking the sanity of the
reports based on the contents of the server logs.
Author: David Steele <david@pgmasters.net>
Discussion: https://postgr.es/m/e5d472c7-e9be-4710-8dc4-ebe721b62cea@pgbackrest.org
Currently, we do not support Memoize for SEMI and ANTI joins because
nested loop SEMI/ANTI joins do not scan the inner relation to
completion, which prevents Memoize from marking the cache entry as
complete. One might argue that we could mark the cache entry as
complete after fetching the first inner tuple, but that would not be
safe: if the first inner tuple and the current outer tuple do not
satisfy the join clauses, a second inner tuple matching the parameters
would find the cache entry already marked as complete.
However, if the inner side is provably unique, this issue doesn't
arise, since there would be no second matching tuple. That said, this
doesn't help in the case of SEMI joins, because a SEMI join with a
provably unique inner side would already have been reduced to an inner
join by reduce_unique_semijoins.
Therefore, in this patch, we check whether the inner relation is
provably unique for ANTI joins and enable the use of Memoize in such
cases.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48FdLiMNrmJL-g6mDvoQVt0yNyJAqMkv4e2Pk-5GKCZLA@mail.gmail.com
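
A hedged example with hypothetical tables: given a unique t2.pk, the
anti join produced by NOT EXISTS may now place a Memoize node over the
parameterized inner scan (the actual plan still depends on costing):

    EXPLAIN (COSTS OFF)
    SELECT * FROM t1
    WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.pk = t1.x);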
In commit fe07100e82, I renamed a couple of functions in
test_dsm_registry to make it clear what they are testing. However,
the buildfarm's cross-version upgrade tests run pg_upgrade with the
test modules installed, so this caused errors like:
    ERROR: could not find function "get_val_in_shmem" in file ".../test_dsm_registry.so"
To fix, revert those renames. I could probably get away with only
un-renaming the C symbols, but I figured I'd avoid introducing
function name mismatches. Also, AFAICT the buildfarm's
cross-version upgrade tests do not run the test module tests
post-upgrade, else we'll need to properly version the extension.
Per buildfarm member crake.
Discussion: https://postgr.es/m/aGVuYUNW23tStUYs%40nathan
Presently, the dynamic shared memory (DSM) registry only provides
GetNamedDSMSegment(), which allocates a fixed-size segment. To use
the DSM registry for more sophisticated things like dynamic shared
memory areas (DSAs) or a hash table backed by a DSA (dshash), users
need to create a DSM segment that stores various handles and LWLock
tranche IDs and to write fairly complicated initialization code.
Furthermore, there is likely little variation in this
initialization code between libraries.
This commit introduces functions that simplify allocating a DSA or
dshash within the DSM registry. These functions are very similar
to GetNamedDSMSegment(). Notable differences include the lack of
an initialization callback parameter and the prohibition of calling
the functions more than once for a given entry in each backend
(which should be trivially avoidable in most circumstances). While
at it, this commit bumps the maximum DSM registry entry name length
from 63 bytes to 127 bytes.
Also note that even though one could presumably detach/destroy the
DSAs and dshashes created in the registry, such use-cases are not
yet well-supported, if for no other reason than the associated DSM
registry entries cannot be removed. Adding such support is left as
a future exercise.
The test_dsm_registry test module contains tests for the new
functions and also serves as a complete usage example.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/aEC8HGy2tRQjZg_8%40nathan
The array-based variant of width_bucket() has always accepted NaN
inputs, treating them as equal but larger than any non-NaN,
as we do in ordinary comparisons. But up to now, the four-argument
variants threw errors for a NaN operand. This is inconsistent
and unnecessary, since we can perfectly well regard NaN as falling
after the last bucket.
We do still throw error for NaN or infinity histogram-bound inputs,
since there's no way to compute sensible bucket boundaries.
Arguably this is a bug fix, but given the lack of field complaints
I'm content to fix it in master.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/2822872.1750540911@sss.pgh.pa.us
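
For example, per the behavior described above:

    SELECT width_bucket('NaN'::float8, 0.0, 10.0, 5);
    -- now returns 6, i.e. after the last bucket, instead of failing
    SELECT width_bucket(1.0, 'NaN'::float8, 10.0, 5);
    -- still an error: NaN histogram bounds remain rejected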
Trying to alter a constraint so that it becomes NOT VALID results in an
error that assumes the constraint is a foreign key. This is potentially
wrong, so give a more generic error message.
While at it, give CREATE CONSTRAINT TRIGGER a better error message as
well.
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
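
A sketch of the affected command, with hypothetical names:

    ALTER TABLE tab ALTER CONSTRAINT tab_val_check NOT VALID;
    -- still rejected, but the message no longer presumes a FOREIGN KEY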
Recent nbtree bugfix commit 5f4d98d4 added a special case to the code
that sets up a page-level prefix of keys that are definitely satisfied
by every tuple on the page: whenever _bt_set_startikey reached a row
compare key, we'd refuse to apply the pstate.forcenonrequired behavior
in scans where that usually happens (scans with a higher-order array
key). That hack made the scan avoid essentially the same infinite
cycling behavior that also affected nbtree scans with redundant keys
(keys that preprocessing could not eliminate) prior to commit f09816a0.
There are now serious doubts about this row compare workaround.
Testing has shown that a scan with a row compare key and an array key
could still read the same leaf page twice (without the scan's direction
changing), which isn't supposed to be possible following the SAOP
enhancements added by Postgres 17 commit 5bf748b8. Also, we still
allowed a required row compare key to be used with forcenonrequired mode
when its header key happened to be beyond the pstate.ikey set by
_bt_set_startikey, which was complicated and brittle.
The underlying problem was that row compares had inconsistent rules
around how scans start (which keys can be used for initial positioning
purposes) and how scans end (which keys can set continuescan=false).
Quals with redundant keys that could not be eliminated by preprocessing
also had that same quality to them prior to today's bugfix f09816a0. It
now seems prudent to bring row compare keys in line with the new charter
for required keys, by making the start and end rules symmetric.
This commit fixes two points of disagreement between _bt_first and
_bt_check_rowcompare. Firstly, _bt_check_rowcompare was capable of
ending the scan at the point where it needed to compare an ISNULL-marked
row compare member that came immediately after a required row compare
member. _bt_first now has symmetric handling for NULL row compares.
Secondly, _bt_first had its own ideas about which keys were safe to use
for initial positioning purposes. It could use fewer or more keys than
_bt_check_rowcompare. _bt_first now uses the same requiredness markings
as _bt_check_rowcompare for this.
Now that _bt_first and _bt_check_rowcompare agree on how to start and
end scans, we can get rid of the forcenonrequired special case, without
any risk of infinite cycling. This approach also makes row compare keys
behave more like regular scalar keys, particularly within _bt_first.
Fixing these inconsistencies necessitates dealing with a related issue
with the way that row compares were marked required by preprocessing: we
didn't mark any lower-order row members required following 2016 bugfix
commit a298a1e0. That approach was overly broad. The bug in question was
actually an oversight in how _bt_check_rowcompare dealt with tuple NULL
values that failed to satisfy a scan key marked required in the opposite
scan direction (it was a bug in 2011 commits 6980f817 and 882368e8, not
a bug in 2006 commit 3a0a16cb). Go back to marking row compare members
as required using the original 2006 rules, and fix the 2016 bug in a
more principled way: by limiting use of the "set continuescan=false with
a key required in the opposite scan direction upon encountering a NULL
tuple value" optimization to the first/most significant row member key.
While it isn't safe to use an implied IS NOT NULL qualifier to end the
scan when it comes from a required lower-order row compare member key,
it _is_ generally safe for such a required member key to end the scan --
provided the key is marked required in the _current_ scan direction.
This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275. It is a direct follow-up to today's commit f09816a0.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=pcijHL_mA0_TJ5LiTB28QpQ0cGtT-ccFV=KzuunNDDQ@mail.gmail.com
Backpatch-through: 18
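
For reference, a row compare key comes from a row-constructor
comparison usable as an nbtree index qual, as in this hypothetical
query:

    SELECT * FROM tab WHERE (a, b) >= (10, 'foo') ORDER BY a, b;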
The previous minimum was to maintain support for Python 3.5, but we
now require Python 3.6 anyway (commit 45363fca637), so that reason is
obsolete. A small bump to Meson 0.57 allows getting rid of a fair
amount of version conditionals and silences some future-deprecated
warnings.
With the version bump, the following deprecation warnings appeared and
are fixed:
    WARNING: Project targets '>=0.57' but uses feature deprecated since '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead
    WARNING: Project targets '>=0.57' but uses feature deprecated since '0.56.0': meson.build_root. use meson.project_build_root() or meson.global_build_root() instead.
It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so
the minimum is actually set to 0.57.2. This is specific to this
version series; in the future we won't necessarily need to be this
precise.
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
This provides a convenient way to look up a database's OID. For
example, the query
    SELECT * FROM pg_shdepend
    WHERE dbid = (SELECT oid FROM pg_database
                  WHERE datname = current_database());
can now be simplified to
    SELECT * FROM pg_shdepend
    WHERE dbid = current_database()::regdatabase;
Like the regrole type, regdatabase has cluster-wide scope, so we
disallow regdatabase constants from appearing in stored
expressions.
Bumps catversion.
Author: Ian Lawrence Barwick <barwick@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBpjJhyHpM2LYcG0%40nathan
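
A small usage sketch; template1 has OID 1 in a stock cluster:

    SELECT 'template1'::regdatabase::oid;  -- 1
    SELECT 1::oid::regdatabase;            -- template1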
If the method is called in scalar context and we didn't pass in a stderr
handle, one won't be created. However, some error paths assume that it
exists, so in this case create a dummy stderr to avoid the resulting
perl error.
Per gripe from Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru> and
adapted from his patch.
Discussion: https://postgr.es/m/378eac5de4b8ecb5be7bcdf2db9d2c4d@postgrespro.ru
log_line_prefix is changed in the TAP test configuration to include
"%b", the backend type. %v and %x are removed from the CI
configuration, and the formatting around %b is adjusted.
The lack of backend type in the postgresql.conf set by Cluster.pm for
the TAP test configuration had been bugging me, and started the
discussion that led to this change. The CI change came up during that
discussion, to stay consistent with pg_regress.c; %v and %x are not
that useful to have.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aC0VaIWAXLgXcHVP@paquier.xyz
This is required before the creation of a new branch. pgindent is
clean, as is reformat-dat-files.
perltidy version is v20230309, as documented in pgindent's README.
In the wake of commit a16ef313f, we need to deal with more cases
involving PlaceHolderVars in NestLoopParams than we did before.
For one thing, a16ef313f was incorrect to suppose that we could
rely on the required-outer relids of the lefthand path to decide
placement of nestloop-parameter PHVs. As Richard Guo argued at
the time, we must look at the required-outer relids of the join
path itself.
For another, we have to apply replace_nestloop_params() to such
a PHV's expression, in case it contains references to values that
will be supplied from NestLoopParams of higher-level nestloops.
For another, we need to be more careful about the phnullingrels
of the PHV than we were being. identify_current_nestloop_params
only bothered to ensure that the phnullingrels didn't contain
"too many" relids, but now it has to be exact, because setrefs.c
will apply both NRM_SUBSET and NRM_SUPERSET checks in different
places. We can compute the correct relids by determining the
set of outer joins that should be able to null the PHV and then
subtracting whatever's been applied at or below this join.
Do the same for plain Vars, too. (This should make it possible
to use NRM_EQUAL to process nestloop params in setrefs.c, but
I won't risk making such a change in v18 now.)
Lastly, if a nestloop parameter PHV was pulled up out of a subquery
and it contains a subquery that was originally pushed down from this
query level, then that will still be represented as a SubLink, because
SS_process_sublinks won't recurse into outer PHVs, so it didn't get
transformed during expression preprocessing in the subquery. We can
substitute the version of the PHV's expression appearing in its
PlaceHolderInfo to ensure that that preprocessing has happened.
(Seems like this processing sequence could stand to be redesigned,
but again, late in v18 development is not the time for that.)
It's not very clear to me why the old have_dangerous_phv join-order
restriction prevented us from seeing the last three of these problems.
But given the lack of field complaints, it must have done so.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18953-1c9883a9d4afeb30@postgresql.org
Sometimes a table's constraint may depend on a column of another
table, so that we have to update the constraint when changing the
referenced column's type. We need to have lock on the constraint's
table to do that. ATPostAlterTypeCleanup believed that this case
was only possible for FOREIGN KEY constraints, but it's wrong at
least for CHECK and EXCLUDE constraints; and in general, we'd
probably need exclusive lock to alter any sort of constraint.
So just remove the contype check and acquire lock for any other
table. This prevents a "you don't have lock" assertion failure,
though no ill effect is observed in production builds.
We'll error out later anyway because we don't presently support
physically altering column types within stored composite columns.
But the catalog-munging is basically all there, so we may as well
make that part work.
Bug: #18970
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Diagnosed-by: jian he <jian.universality@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18970-a7d1cfe1f8d5d8d9@postgresql.org
Backpatch-through: 13
This is purely cosmetic, as dsm_attach() interprets its argument as
a dsm_handle (i.e., an unsigned integer), but we might as well fix
it.
Oversight in commit 4db3744f1f.
Author: Jianghua Yang <yjhjstz@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAAZLFmRxkUD5jRs0W3K%3DUe4_ZS%2BRcAb0PCE1S0vVJBn3sWH2UQ%40mail.gmail.com
Backpatch-through: 13
We were missing collecting comments for not-null constraints that are
dumped inline with the table definition (i.e., valid ones), because they
aren't represented by a separately dumpable object. Fix by creating
separate TocEntries for the comments.
Co-Authored-By: Jian He <jian.universality@gmail.com>
Co-Authored-By: Álvaro Herrera <alvherre@kurilemu.de>
Reported-By: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-By: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/d50ff977-c728-4e9e-8488-fc2688e08754@oss.nttdata.com
Commit 14e87ffa5c5 introduced support for adding comments to NOT NULL
constraints. However, CREATE TABLE LIKE INCLUDING COMMENTS did not copy
these comments to the new table. This was an oversight in that commit.
This commit corrects the behavior by ensuring that CREATE TABLE LIKE
also copies the comments on NOT NULL constraints when INCLUDING
COMMENTS is specified.
Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/127debef-e558-4784-9e24-0d5eaf91e2d1@oss.nttdata.com
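
A sketch of the fixed behavior, assuming the default generated
constraint name src_a_not_null:

    CREATE TABLE src (a int NOT NULL);
    COMMENT ON CONSTRAINT src_a_not_null ON src IS 'a is required';
    CREATE TABLE dst (LIKE src INCLUDING COMMENTS);
    -- dst's not-null constraint now carries the comment as well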
For the subcommand ALTER COLUMN TYPE of the ALTER TABLE command, the
USING expression may reference virtual generated columns. These
columns must be expanded before the expression is fed through
expression_planner and the expression-execution machinery. Failing to
do so can result in incorrect rewrite decisions, and can also lead to
"ERROR: unexpected virtual generated column reference".
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/b5f96b24-ccac-47fd-9e20-14681b894f36@gmail.com
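
A hypothetical example of such a USING expression:

    CREATE TABLE t (a int,
                    b int GENERATED ALWAYS AS (a * 2) VIRTUAL,
                    c text);
    -- b must be expanded to (a * 2) before planning the rewrite
    ALTER TABLE t ALTER COLUMN c TYPE varchar(20) USING (a + b)::text;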
Just like selecting from a view is exploitable (CVE-2024-7348),
selecting from a table with virtual generated columns is exploitable.
Users who are concerned about this can avoid selecting from views, but
telling them to avoid selecting from tables is less practical.
To address this, this changes it so that generation expressions for
virtual generated columns are restricted to using built-in functions
and types, and the columns are restricted to having a built-in type.
We assume that built-in functions and types cannot be exploited for
this purpose.
In the future, this could be expanded by some new mechanism to declare
other functions and types as safe or trusted for this purpose, but
that is to be designed.
(An alternative approach might have been to expand the
restrict_nonsystem_relation_kind GUC to handle this, like the fix for
CVE-2024-7348. But that is kind of an ugly approach. That fix had to
fit in the constraints of fixing an ancient vulnerability in all
branches. Since virtual generated columns are new, we're free from
the constraints of the past, and we can and should use cleaner
options.)
Reported-by: Feike Steenbergen <feikesteenbergen@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAK_s-G2Q7de8Q0qOYUR%3D_CTB5FzzVBm5iZjOp%2BmeVWpMpmfO0w%40mail.gmail.com
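
A sketch of what is now rejected; the function name is hypothetical
and the error wording is assumed:

    CREATE FUNCTION f_plus_one(int) RETURNS int
      LANGUAGE SQL IMMUTABLE RETURN $1 + 1;
    CREATE TABLE t (a int,
                    b int GENERATED ALWAYS AS (f_plus_one(a)) VIRTUAL);
    -- fails: virtual generation expressions may only use built-in
    -- functions and types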
This fixes two issues with the handling of VacuumParams in vacuum_rel().
This code path modifies the passed-in VacuumParams for the "truncate"
and "index_cleanup" options of the relation being worked on, impacting
the two following scenarios, where incorrect options may be used
because a VacuumParams pointer is shared across multiple relations:
- Multiple relations in a single VACUUM command.
- TOAST relations vacuumed with their main relation.
The problem is avoided by providing to the two callers of vacuum_rel()
copies of VacuumParams, before the pointer is updated for the "truncate"
and "index_cleanup" options.
The refactoring of the VACUUM option and parameters done in 0d831389749a
did not introduce an issue, but it has encouraged the problem we are
dealing with in this commit, with b84dbc8eb80b for "truncate" and
a96c41feec6b for "index_cleanup" that have been added a couple of years
after the initial refactoring. HEAD will be improved with a different
patch that hardens the uses of VacuumParams across the tree. This
cannot be backpatched as it introduces an ABI breakage.
The backend portion of the patch has been authored by Nathan, while I
have implemented the tests. The tests rely on injection points to
check the option values, making them faster and more reliable than the
tests originally proposed by Shihao, and they also provide more
coverage.
This part can only be backpatched down to v17.
Reported-by: Shihao Zhong <zhong950419@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAGRkXqTo+aK=GTy5pSc-9cy8H2F2TJvcrZ-zXEiNJj93np1UUw@mail.gmail.com
Backpatch-through: 13
If vacuum fails to prune a tuple killed before OldestXmin, it will
decide to freeze its xmax and later error out in pre-freeze checks.
Add a test reproducing this scenario to the recovery suite: it creates
a table on a primary, updates the table to generate dead tuples for
vacuum, and then, during the vacuum, uses a replica to force
GlobalVisState->maybe_needed on the primary to move backwards and
precede the value of OldestXmin set at the beginning of vacuuming the
table.
This test is coverage for a case fixed in 83c39a1f7f3. The test was
originally committed to master in aa607980aee but later reverted in
efcbb76efe4 due to test instability.
The test requires multiple index passes. In Postgres 17+, vacuum uses a
TID store for the dead TIDs that is very space efficient. With the old
minimum maintenance_work_mem of 1 MB, it required a large number of dead
rows to generate enough dead TIDs to force multiple index
vacuuming passes. Once the source code changes were made to allow a
minimum maintenance_work_mem value of 64kB, the test could be made much
faster and more stable.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAAKRu_ZJBkidusDut6i%3DbDCiXzJEp93GC1%2BNFaZt4eqanYF3Kw%40mail.gmail.com
Backpatch-through: 17
Virtual generated columns have some special checks in
CheckAttributeType(), mainly to check that domains are not used. But
these checks were only applied during CREATE TABLE, not during ALTER
TABLE. This fixes that.
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/CACJufxE0KHR__-h=zHXbhSNZXMMs4LYo4-dbj8H3YoStYBok1Q@mail.gmail.com
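
A sketch of the case now caught during ALTER TABLE (names
hypothetical):

    CREATE DOMAIN posint AS int CHECK (VALUE > 0);
    CREATE TABLE t (a int);
    ALTER TABLE t ADD COLUMN b posint GENERATED ALWAYS AS (a) VIRTUAL;
    -- now rejected, matching the CREATE TABLE behavior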
\close was introduced in d55322b0da60 to be able to close a prepared
statement using the extended protocol in psql. Per discussion, the
name "close" is ambiguous: at the SQL level, CLOSE is used to close a
cursor, while at the protocol level the Close message can be used to
close either a statement or a portal.
This patch renames \close to \close_prepared to avoid any ambiguity and
make it clear that this is used to close a prepared statement. This new
name has been chosen based on the feedback from the author and the
reviewers.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/3e694442-0df5-4f92-a08f-c5d4c4346b85@eisentraut.org
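
A psql usage sketch with a hypothetical statement name:

    SELECT $1 AS val \parse stmt1
    \close_prepared stmt1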
This new test was intended to check the handling of the replication slot's
restart lsn fixed in ca307d5cec90. However, it also reveals another issue
related to logical decoding. This commit temporarily removes this test to
keep the buildfarm and CFbot green and avoid distorting others' work. This
test will be restored once we investigate and fix the issue.
Discussion: https://postgr.es/m/CAAKRu_ZCOzQpEumLFgG_%2Biw3FTa%2BhJ4SRpxzaQBYxxM_ZAzWcA%40mail.gmail.com
Commit 85e5e222b, which added (a forerunner of) this logic,
argued that
    Adding the necessary complexity to make this work doesn't seem like
    it would be repaid in significantly better plans, because in cases
    where such a PHV exists, there is probably a corresponding join order
    constraint that would allow a good plan to be found without using the
    star-schema exception.
The flaw in this claim is that there may be other join-order
restrictions that prevent us from finding a join order that doesn't
involve a "dangerous" PHV. In particular we now recognize that
small join_collapse_limit or from_collapse_limit could prevent it.
Therefore, let's bite the bullet and make the case work.
We don't have to extend the executor's support for nestloop parameters
as I thought at the time, because we can instead push the evaluation
of the placeholder's expression into the left-hand input of the
NestLoop node. So there's not really a lot of downside to this
solution, and giving the planner more join-order flexibility should
have value beyond just avoiding failure.
Having said that, there surely is a nonzero risk of introducing
new bugs. Since this failure mode escaped detection for ten years,
such cases don't seem common enough to justify a lot of risk.
Therefore, let's put this fix into master but leave the back branches
alone (for now anyway).
Bug: #18953
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Diagnosed-by: Richard Guo <guofenglinux@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18953-1c9883a9d4afeb30@postgresql.org
The TAP tests that verify logical and physical replication slot behavior
during checkpoints (046_checkpoint_logical_slot.pl and
047_checkpoint_physical_slot.pl) inserted two batches of 2 million rows each,
generating approximately 520 MB of WAL. On slow machines, or when compiled
with '-DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE', this caused the
tests to run for 8-9 minutes and occasionally time out, as seen on the
buildfarm animal prion.
This commit modifies the mentioned tests to use the
$node->advance_wal() function, thereby reducing runtime. Since the
generated data is not otherwise used, the function is a good
alternative that cuts the total wall-clock run time.
While here, remove superfluous '\n' characters from several note()
calls; these appeared literally in the build-farm logs and looked odd.
Also, remove the unnecessary 'shared_preload_libraries' GUC from the
config and add a check for 'injection_points' extension availability.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Vitaly Davydov <v.davydov@postgrespro.ru>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/fbc5d94e-6fbd-4a64-85d4-c9e284a58eb2%40gmail.com
Backpatch-through: 17
Our maintenance of typedefs.list has been a little haphazard
(and apparently we can't alphabetize worth a darn). Replace
the file with the authoritative list from our buildfarm, and
run pgindent using that.
I also updated the additions/exclusions lists in pgindent where
necessary to keep pgindent from messing things up significantly.
Notably, now that regex_t and some related names are macros not real
typedefs, we have to whitelist them explicitly. The exclusions list
has also drifted noticeably, presumably due to changes of system
headers on the buildfarm animals that contribute to the list.
Unlike in prior years, I've not manually added typedef names that
are missing from the buildfarm's list because they are not used to
declare any variables or fields. So there are a few places where
the typedef declaration itself is formatted worse than before,
e.g. typedef enum IoMethod. I could preserve the names that were
manually added to the list previously, but I'd really prefer to find
a less manual way of dealing with these cases. A quick grep finds
about 75 such symbols, most of which have never gotten any special
treatment.
Per discussion among pgsql-release, doing this now seems appropriate
even though we're still a week or two away from making the v18 branch.
The new tests verify that logical and physical replication slots are still
valid after an immediate restart on checkpoint completion when the slot was
advanced during the checkpoint.
This commit introduces two new injection points to make these tests possible:
* checkpoint-before-old-wal-removal - triggered in the checkpointer process
just before old WAL segments cleanup;
* logical-replication-slot-advance-segment - triggered in
LogicalConfirmReceivedLocation() when restart_lsn was changed enough to
point to the next WAL segment.
Discussion: https://postgr.es/m/flat/1d12d2-67235980-35-19a406a0%4063439497
Author: Vitaly Davydov <v.davydov@postgrespro.ru>
Author: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17
Running COPY within a pipeline can break protocol synchronization in
multiple ways. psql is limited in terms of result processing if mixing
COPY commands with normal queries while controlling a pipeline with the
new meta-commands, for the following reasons:
- In COPY mode, the backend ignores additional Sync messages and will
not send a matching ReadyForQuery expected by the frontend. Doing a
\syncpipeline just after COPY will leave the frontend waiting for a
ReadyForQuery message that won't be sent, leaving psql out-of-sync.
- libpq automatically sends a Sync with the Copy message which is not
tracked in the command queue, creating an unexpected synchronisation
point that psql cannot really know about. While it is possible to track
such activity for a \copy, this cannot really be done sanely with plain
COPY queries. Backend failures during a COPY would leave the pipeline
in an aborted state while the backend would be in a clean state, ready
to process commands.
At the end, fixing those issues would require modifications in how libpq
handles pipeline and COPY. So, rather than implementing workarounds in
psql to shortcut the libpq internals (with command queue handling for
one), and because meta-commands for pipelines in psql are a new feature
with COPY in a pipeline having a limited impact compared to other
queries, this commit forbids the use of COPY within a pipeline to avoid
possible break of protocol synchronisation within psql. If there is a
use-case for COPY support within pipelines in libpq, this could always
be added in the future, if necessary.
Most of the changes of this commit impact the tests for psql
pipelines, removing the tests related to COPY. Some TAP tests still
exist for COPY TO/FROM and \copy to/from, to check that connections
are aborted when this operation is attempted.
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/AC468509-06E8-4E2A-A4B1-63046A4AC6AB@postgrespro.ru
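
A sketch of the now-forbidden sequence (table name hypothetical):

    \startpipeline
    COPY tab FROM STDIN;  -- rejected: COPY is not allowed in a pipeline
    \endpipeline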
Oversight in commit 8b2bcf3f28.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aECi_gSD9JnVWQ8T%40nathan
Backpatch-through: 17
Similar to commit cc733ed164c5: when an unenforced foreign key that
references a partitioned table is altered to be enforced, we scan
the constrained table based on each partition on the referenced
partitioned table. This is bogus and likely to cause the ALTER TABLE to
fail: we must only scan the constrained table as pointing to the
top-level partitioned table. Oversight in commit eec0040c4bcd. Fix by
eliding those scans.
Author: Amul Sul <sulamul@gmail.com>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF1e_gPOLtsDoaE4VCgQPC8KZW_kPAjPR5Rvv4Ew=fb2A@mail.gmail.com
Validating an unvalidated foreign key that references a partitioned
table would try to queue validations for each individual partition of
the referenced table, but this is wrong: each individual partition would
not necessarily have all the referenced rows, so errors would be raised.
Avoid doing that. The pg_constraint rows that cause this to happen
are only there to support the action triggers that implement the
DELETE/UPDATE actions of the FK, so no validating scan is necessary.
This was an oversight in commit b663b9436e75.
An equivalent oversight exists for NOT ENFORCED constraints, which is
not fixed in this commit.
Author: Amul Sul <sulamul@gmail.com>
Reported-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/26983.1748418675@localhost
The choices made in commit 01463e1cc might pose copyright hazards,
and are more cutesy than informative anyway.
Reported-by: Noah Misch <noah@leadboat.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20250415155850.9b.nmisch@google.com
Commit 7406ab623fe added a gist support function that we internally
refer to by the symbol GIST_STRATNUM_PROC. This translated from
"well-known" strategy numbers to opfamily-specific strategy numbers.
However, we later (commit 630f9a43cec) changed this to fit into
index-AM-level compare type mapping, so this function actually now
maps from compare type to opfamily-specific strategy numbers. So this
name is no longer fitting.
Moreover, the index AM level also supports the opposite, a function to
map from strategy number to compare type. This is currently not
supported in gist, but one might wonder what this function is supposed
to be called when it is added.
This patch changes the naming of the gist-level functionality to be
more in line with the index-AM-level functionality. This makes sense
because these are essentially the same thing on different levels.
This also changes the names of the externally visible functions that
are provided for use as such a support function.
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/37ebb1d9-9036-485f-a215-e55435689917%40eisentraut.org
When a MERGE's target table is the parent of an inheritance tree, any
INSERT actions insert into the parent table using ModifyTableState's
rootResultRelInfo. However, there are two bugs in the way it is
initialized:
1. ExecInitMerge() incorrectly uses a different ResultRelInfo entry
from ModifyTableState's resultRelInfo array to build the insert
projection, which may not be compatible with rootResultRelInfo.
2. ExecInitModifyTable() does not fully initialize rootResultRelInfo.
Specifically, ri_WithCheckOptions, ri_WithCheckOptionExprs,
ri_returningList, and ri_projectReturning are not initialized.
This can lead to crashes, or incorrect query results due to failing to
check WCO's or process the RETURNING list for INSERT actions.
Fix both these bugs in ExecInitMerge(), noting that it is only
necessary to fully initialize rootResultRelInfo if the MERGE has
INSERT actions and the target table is a plain inheritance parent.
Backpatch to v15, where MERGE was introduced.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/4rlmjfniiyffp6b3kv4pfy4jw3pciy6mq72rdgnedsnbsx7qe5@j5hlpiwdguvc
Backpatch-through: 15
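
A hedged repro sketch with hypothetical tables; parent is a plain
inheritance parent, and the RETURNING form shown requires v17 or
later:

    MERGE INTO parent p
    USING src s ON p.id = s.id
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
    RETURNING p.id;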
ParseFraction only expects to deal with fields that contain a decimal
point and digit(s). However, it's possible in some edge cases for it
to be passed input that doesn't look like that. In particular the
input could look like a valid floating-point number, such as ".123e6".
strtod() will happily eat that, possibly producing a result that is
not within the expected range 0..1, which can result in integer
overflow in the callers. That doesn't have any security consequences,
but it's still not very desirable. Fix by checking that the input
has the expected form.
Similarly, DecodeNumberField only expects to deal with fields that
contain a decimal point and digit(s), but it's sometimes abused to
parse strings that might not look like that. This could result in
failure to reject bogus input, yielding silly results. Again, fix
by rejecting input that doesn't look as-expected. That decision
also means that we can affirmatively answer the very old comment
questioning whether we couldn't save some duplicative code by
using ParseFractionalSecond here.
While these changes should only reject input that nobody would
consider valid, it still doesn't seem like a change to make in
stable branches. Apply to HEAD only.
Reported-by: Evgeniy Gorbanev <gorbanev.es@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1328335.1748371099@sss.pgh.pa.us
As written, the test was throwing an error because of an unbalanced
parenthesis. The regex used in the test is adjusted to not fail and to
test the case of an opening parenthesis in a character class after some
nested square brackets.
Oversight in d46911e584d4.
Discussion: https://postgr.es/m/16ab039d1af455652bdf4173402ddda145f2c73b.camel@cybertec.at
The code that translates SIMILAR TO pattern matching expressions to
POSIX-style regular expressions did not consider that square brackets
can be nested. For example, in an expression like [[:alpha:]%_], the
logic replaced the placeholders '_' and '%' but it should not.
This commit fixes the conversion logic by tracking the nesting level of
square brackets marking character class areas, while considering that
in expressions like []] or [^]] the first closing square bracket is a
regular character. Multiple tests are added to show how the conversions
should or should not be applied while in a character class area, with
specific cases added for all the characters converted outside character
classes, like an opening parenthesis '(', dollar sign '$', etc.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/16ab039d1af455652bdf4173402ddda145f2c73b.camel@cybertec.at
Backpatch-through: 13
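
Two examples of the fixed conversion:

    SELECT 'a%' SIMILAR TO '_[[:alpha:]%_]';
    -- true: '%' and '_' are literal inside the bracket expression
    SELECT ']' SIMILAR TO '[]]';
    -- true: a ']' right after '[' is a regular character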
The test did not wait for all the subscriptions to have caught up when
dropping the subscription "tab_copy". In a slow environment, it could
be possible for the replay of the COMMIT PREPARED transaction "mygid"
to not be confirmed yet, causing one prepared transaction to be left
around before moving to the next steps of the test.
One failure noticed is a transaction found in pg_prepared_xacts for the
cases where copy_data = false and two_phase = true, but there should be
none after dropping the subscription.
As an extra safety measure, a check is added before dropping the
subscription, scanning pg_prepared_xacts to make sure that no prepared
transactions are left once both subscriptions have caught up.
Issue introduced by a8fd13cab0ba, fixing a problem similar to
eaf5321c3524.
Per buildfarm member kestrel.
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CALDaNm329QaZ+bwU--bW6GjbNSZ8-38cDE8QWofafub7NV67oA@mail.gmail.com
Backpatch-through: 15
Check the ctx->nested level as we go, to prevent a server from running
the client out of stack space.
The limit we choose when communicating with authorization servers can't
be overly strict, since those servers will continue to add extensions in
their JSON documents which we need to correctly ignore. For the SASL
communication, we can be more conservative, since there are no defined
extensions (and the peer is probably more Postgres code).
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAOYmi%2Bm71aRUEi0oQE9ciBnBS8xVtMn3CifaPu2kmJzUfhOZgA%40mail.gmail.com
Due to concerns raised about the approach, and memory leaks found
in sensitive contexts, the functionality is reverted. This reverts
commits 45e7e8ca9, f8c115a6c, d2a1ed172, 55ef7abf8 and 042a66291
for v18 with an intent to revisit this patch for v19.
Discussion: https://postgr.es/m/594293.1747708165@sss.pgh.pa.us
9219093cab2607f modularized log_connections output to allow more
granular control over which aspects of connection establishment are
logged. It converted the boolean log_connections GUC into a list of strings
and deprecated the previously supported boolean-like values on, off,
true, false, 1, 0, yes, and no. Those values still work, but they are
supported mainly for backward compatibility. As such, documented
examples of log_connections should not use these deprecated values.
Update references in the docs to deprecated log_connections values. Many
of the tests use log_connections. This commit also updates the tests to
use the new values of log_connections. In some of the tests, the updated
log_connections value covers a narrower set of aspects (e.g. the
'authentication' aspect in the tests in src/test/authentication and the
'receipt' aspect in src/test/postmaster). In other cases, the new value
for log_connections is a superset of the previous included aspects (e.g.
'all' in src/test/kerberos/t/001_auth.pl).
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/e1586594-3b69-4aea-87ce-73a7488cdc97%40eisentraut.org
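
A configuration sketch using aspect values named above:

    ALTER SYSTEM SET log_connections = 'receipt,authentication';
    SELECT pg_reload_conf();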
The code carelessly modified mtstate->ps.plan->targetlist,
which it's not supposed to do. Fortunately, there's not
really any need to do that because the planner already
set up a perfectly acceptable targetlist for the plan node.
We just need to remove the erroneous assignments and update some
relevant comments.
As it happens, the erroneous assignments caused the targetlist to
point to a different part of the source plan tree, so that there
isn't really a risk of the pointer becoming dangling after executor
termination. The only visible effect of this change we can find is
that EXPLAIN will show upper references to the ModifyTable's output
expressions using different variables. Formerly it showed Vars from
the first target relation that survived executor-startup pruning.
Now it always shows such references using the first relation appearing
in the planner output, independently of what happens during executor
pruning. On the whole that seems like a good thing.
Also make a small tweak in ExplainPreScanNode to ensure that the first
relation will receive a refname assignment in set_rtable_names, even
if it got pruned at startup. Previously the Vars might be shown
without any table qualification, which is confusing in a multi-table
query.
I considered back-patching this, but since the bug doesn't seem to
have any really terrible consequences in existing branches, it
seems better to not change their EXPLAIN output. It's not too late
for v18 though, especially since v18 already made other changes in
the EXPLAIN output for these cases.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/213261.1747611093@sss.pgh.pa.us