Some things in be-secure-openssl.c and fe-secure-openssl.c were not
actually specific to OpenSSL but could also be used by other
implementations. In order to avoid copy-and-pasting, move some of that
code to common files.
Move the documentation of what the SSL API calls are supposed to do into
the header files, instead of keeping it in the files for the OpenSSL
implementation. That way, it doesn't have to be duplicated or become
inconsistent when other implementations are added.
The EDH support is not really specific to the OpenSSL implementation, so
move the support and documentation comments to common files.
Split the "Authentication and Security" section into two separate
sections "Authentication" and "SSL". The latter part has gotten much
longer over time, and doesn't primarily have to do with authentication.
Also, the row_security parameter was inconsistently categorized, so
clean that up while we're here.
Several of the test suites under src/test/ were missing an installcheck
target.
This patch rearranges the division of labor between pg_dump and pg_dumpall
so that pg_dump itself handles all properties attached to a single
database. Notably, a database's ACL (GRANT/REVOKE status) and local GUC
settings established by ALTER DATABASE SET and ALTER ROLE IN DATABASE SET
can be dumped and restored by pg_dump. This is a long-requested
improvement.

"pg_dumpall -g" will now produce only role- and tablespace-related output,
nothing about individual databases. The total output of a regular
pg_dumpall run remains the same.

pg_dump (or pg_restore) will restore database-level properties only when
creating the target database with --create. This applies not only to
ACLs and GUCs but to the other database properties it already handled,
that is database comments and security labels. This is more consistent
and useful, but does represent an incompatibility in the behavior seen
without --create.

(This change makes the proposed patch to have pg_dump use "COMMENT ON
DATABASE CURRENT_DATABASE" unnecessary, since there is no case where
the command is issued that we won't know the true name of the database.
We might still want that patch as a feature in its own right, but pg_dump
no longer needs it.)

pg_dumpall with --clean will now drop and recreate the "postgres" and
"template1" databases in the target cluster, allowing their locale and
encoding settings to be changed if necessary, and providing a cleaner
way to set nondefault tablespaces for them than we had before. This
means that such a script must now always be started in the "postgres"
database; the order of drops and reconnects will not work otherwise.
Without --clean, the script will not adjust any database-level properties
of those two databases (including their comments, ACLs, and security
labels, which it formerly would try to set).

Another minor incompatibility is that the CREATE DATABASE commands in a
pg_dumpall script will now always specify locale and encoding settings.
Formerly those would be omitted if they matched the cluster's default.
While that behavior had some usefulness in some migration scenarios,
it also posed a significant hazard of unwanted locale/encoding changes.
To migrate to another locale/encoding, it's now necessary to use pg_dump
without --create to restore into a database with the desired settings.

Commit 4bd371f6f's hack to emit "SET default_transaction_read_only = off"
is gone: we now dodge that problem by the expedient of not issuing ALTER
DATABASE SET commands until after reconnecting to the target database.
Therefore, such settings won't apply during the restore session.

In passing, improve some shaky grammar in the docs, and add a note pointing
out that pg_dumpall's output can't be expected to load without any errors.
(Someday we might want to fix that, but this is not that patch.)

Haribabu Kommi, reviewed at various times by Andreas Karlsson,
Vaishnavi Prabakaran, and Robert Haas; further hacking by me.

Discussion: https://postgr.es/m/CAJrrPGcUurV0eWTeXODwsOYFN=Ekq36t1s0YnFYUNzsmRfdAyA@mail.gmail.com
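For illustration, a sketch of the database-level commands a dump made
with --create can now carry (database, role, and setting names here are
hypothetical):

    CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8';
    COMMENT ON DATABASE mydb IS 'application database';
    GRANT CONNECT ON DATABASE mydb TO app_user;                  -- database ACL
    ALTER DATABASE mydb SET work_mem = '64MB';                   -- database GUC
    ALTER ROLE app_user IN DATABASE mydb SET search_path = app;  -- per-role GUC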
Most of the code in pg_dump dumps an object's comment, security label,
and ACL auxiliary TOC entries, in that order, immediately after the
object's main TOC entry, and at least dumpComment's API spec says this
isn't optional. dumpDatabase was significantly violating that when
in binary-upgrade mode, by inserting totally unrelated stuff in between.
Also, dumpForeignDataWrapper and dumpForeignServer were being randomly
inconsistent. Reorder code so everybody does it the same.
This may be future-proofing us against some code growing a requirement for
such auxiliary entries to be adjacent to their main entry. But for now
it's just neatnik-ism, so I see no need for back-patch.
Discussion: https://postgr.es/m/21714.1516553459@sss.pgh.pa.us
Commit 8561e4840c81f7e345be2df170839846814fa004 neglected to handle
older Python versions that don't support the "with" statement. So write
the tests in a way that older versions can handle as well.
_tocEntryRequired() expects that it can identify ACL, SECURITY LABEL,
and COMMENT TOC entries that are for large objects by seeing whether
the tag for them starts with "LARGE OBJECT ". While that works fine
for actual large objects, which are indeed tagged that way, it's
subject to false positives unless every such entry's tag starts with an
appropriate type ID. And in fact it does not work for ACLs, because
up to now we customarily tagged those entries with just the bare name
of the object. This means that an ACL for an object named
"LARGE OBJECT something" would be misclassified as data not schema,
with undesirable results in a schema-only or data-only dump ---
although pg_upgrade seems unaffected, due to the special case for
binary-upgrade mode further down in _tocEntryRequired().

We can fix this by changing all the dumpACL calls to use the label
strings already in use for comments and security labels, which do
follow the convention of starting with an object type indicator.
Well, mostly they follow it. dumpDatabase() got it wrong, using
just the bare database name for those purposes, so that a database
named "LARGE OBJECT something" would similarly be subject to having
its comment or security label dropped or included when not wanted.
Bring that into line too. (Note that up to now, database ACLs have
not been processed by pg_dump, so that this issue doesn't affect them.)

_tocEntryRequired() itself is not free of fault: it was overly liberal
about matching object tags to "LARGE OBJECT " in binary-upgrade mode.
This looks like it is probably harmless because there would be no data
component to strip anyway in that mode, but at best it's trouble
waiting to happen, so tighten that up too.

The possible misclassification of SECURITY LABEL entries for databases is
in principle a security problem, but the opportunities for actual exploits
seem too narrow to be interesting. The other cases seem like just bugs,
since an object owner can change its ACL or comment for himself; he
needn't try to trick someone else into doing it by choosing a strange name.

This has been broken since per-large-object TOC entries were introduced
in 9.0, so back-patch to all supported branches.

Discussion: https://postgr.es/m/21714.1516553459@sss.pgh.pa.us
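For illustration, the kind of name collision at issue (contrived,
hypothetical names):

    CREATE TABLE "LARGE OBJECT t" (id int);
    GRANT SELECT ON TABLE "LARGE OBJECT t" TO PUBLIC;
    -- Before the fix, the ACL TOC entry for this table was tagged with just
    -- the bare name "LARGE OBJECT t", so _tocEntryRequired() mistook it for
    -- a large-object entry and could treat it as data rather than schema.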
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language. Add similar underlying functions to SPI. Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call. Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.

- SPI

Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags. The only option flag right now is
SPI_OPT_NONATOMIC. A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed. This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called. A nonatomic SPI connection uses different
memory management. A normal SPI connection allocates its memory in
TopTransactionContext. For nonatomic connections we use PortalContext
instead. As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.

SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.

- portalmem.c

Some adjustments were made in the code that cleans up portals at
transaction abort. The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.

In AtAbort_Portals(), remove the code that marks an active portal as
failed. As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort. And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception. So the code in AtAbort_Portals() is never used
anyway.

In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much. This mirrors similar code in
PreCommit_Portals().

- PL/Perl

Gets new functions spi_commit() and spi_rollback().

- PL/pgSQL

Gets new commands COMMIT and ROLLBACK.

Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.

- PL/Python

Gets new functions plpy.commit and plpy.rollback.

- PL/Tcl

Gets new commands commit and rollback.

Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
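A minimal PL/pgSQL sketch of the new capability (table name
hypothetical):

    CREATE TABLE test1 (a int);

    CREATE PROCEDURE transaction_test()
    LANGUAGE plpgsql
    AS $$
    BEGIN
        FOR i IN 0..9 LOOP
            INSERT INTO test1 (a) VALUES (i);
            IF i % 2 = 0 THEN
                COMMIT;        -- legal here: top-level CALL, nonatomic SPI
            ELSE
                ROLLBACK;      -- discards the insert of the odd value
            END IF;
        END LOOP;
    END;
    $$;

    CALL transaction_test();   -- the same COMMIT inside a function would fail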
Add support for huge pages (called large pages on Windows) to the
Windows build.
This (probably) breaks compatibility with Windows versions prior to
Windows 2003 or Windows Vista.
Authors: Takayuki Tsunakawa and Thomas Munro
Reviewed by: Magnus Hagander, Amit Kapila
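The feature is driven by the existing huge_pages setting; a sketch (the
Windows privilege note is an assumption based on how Windows large pages
generally work):

    -- Postmaster-level setting; a server restart is required.
    ALTER SYSTEM SET huge_pages = 'try';  -- or 'on' to insist on large pages
    -- On Windows, the account running the server additionally needs the
    -- "Lock pages in memory" privilege for large pages to be granted.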
The field is still called "hostaddr", so make sure references use
"hostaddr values" instead.
Author: Michael Paquier <michael.paquier@gmail.com>
The previous code converted SPI_processed to a Python float if it didn't
fit into a Python int. But Python longs have unlimited precision, so
use that instead in all cases.
As in eee50a8d4c389171ad5180568a7221f7e9b28f09, we use the Python
LongLong API unconditionally for simplicity.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
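A PL/Python sketch of where that row count surfaces (the plpythonu
language name is assumed):

    DO LANGUAGE plpythonu $$
    rv = plpy.execute("SELECT generate_series(1, 3)")
    # rv.nrows() exposes SPI_processed; it is now always an exact Python
    # int/long, never a float, no matter how large the count gets.
    plpy.notice(rv.nrows())
    $$;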
Apparently, Peter's compiler has faith that the switch test values here
could never not be valid values of their enums. Mine does not, and
I tend to agree with it.
We don't actually need two code paths, one for 32 bits and one for 64
bits. Since the existing code already assumed that "long long" is
available, we can just use PyLong_FromLongLong() for 64 bits as well.
In Python 2.5 and later, PyLong_FromLong() and PyLong_FromLongLong() use
the same code, so there will be no difference for 64-bit platforms. In
Python 2.4, the code is different, but performance testing showed no
noticeable difference in PL/Python, and that Python version is ancient
anyway.
Discussion: https://www.postgresql.org/message-id/0a02203c-e157-55b2-464e-6087066a1849@2ndquadrant.com
When an UPDATE causes a row to no longer match the partition
constraint, try to move it to a different partition where it does
match the partition constraint. In essence, the UPDATE is split into
a DELETE from the old partition and an INSERT into the new one. This
can lead to surprising behavior in concurrency scenarios because
EvalPlanQual rechecks won't work as they normally do; the known
problems are documented. (There is a pending patch to improve the
situation further, but it needs more review.)
Amit Khandekar, reviewed and tested by Amit Langote, David Rowley,
Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro
Herrera, Amit Kapila, and me. A few final revisions by me.
Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
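A minimal sketch of the new behavior (table and values hypothetical):

    CREATE TABLE measurement (logdate date, peaktemp int)
        PARTITION BY RANGE (logdate);
    CREATE TABLE measurement_y2017 PARTITION OF measurement
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    CREATE TABLE measurement_y2018 PARTITION OF measurement
        FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

    INSERT INTO measurement VALUES ('2017-06-01', 35);

    -- Previously this failed with a partition constraint violation; now the
    -- row is moved, as a DELETE from measurement_y2017 plus an INSERT into
    -- measurement_y2018.
    UPDATE measurement SET logdate = '2018-06-01' WHERE peaktemp = 35;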
When an index column is an expression, it makes no sense to compare its
attribute numbers.
This seems to account for remaining buildfarm fallout from 8b08f7d4820f.
At least, it solves the issue in my local 32bit VM -- let's see what the
rest thinks.
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that. It's only used in
aclcheck_error. By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
There used to be a lot of different *Type and *Kind symbol groups to
address objects within different commands, most of which have been
replaced by ObjectType, starting with
b256f2426433c56b4bea3a8102757749885b81ba. But this conversion was never
done for the ACL commands until now.
This change ends up being just a plain replacement of the types and
symbols, without any code restructuring needed, except deleting some now
redundant code.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
Reviewed-by: Stephen Frost <sfrost@snowman.net>
I missed a '0' in the version number string ...
Per buildfarm member crake.
Per buildfarm
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.
As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones. Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.
To support pg_dump'ing these indexes, add commands
CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index). These reconstruct prior
database state exactly.
Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/20171113170646.gzweigyrgg6pwsg4@alvherre.pgsql
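A sketch of both forms, reusing the hypothetical partitioned measurement
table from the earlier example:

    -- Recursive form: creates the parent index and one index per partition.
    CREATE INDEX ON measurement (peaktemp);

    -- pg_dump-style reconstruction: parent placeholder first, then the
    -- per-partition index, then attach it to the parent.
    CREATE INDEX measurement_idx ON ONLY measurement (logdate);
    CREATE INDEX measurement_y2017_idx ON measurement_y2017 (logdate);
    ALTER INDEX measurement_idx
        ATTACH PARTITION measurement_y2017_idx;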
For no apparent reason, this function was using a 16-bit-wide inhseqno
value, rather than the correct 32-bit width, which is what is stored in
the pg_inherits catalog. This becomes evident if you try to create a
table with more than 65535 parents, because this error appears:

    ERROR: duplicate key value violates unique constraint «pg_inherits_relid_seqno_index»
    DETAIL: Key (inhrelid, inhseqno)=(329371, 0) already exists.

Needless to say, having so many parents is an uncommon situation, which
explains why this error has never been reported despite having been
introduced with the Postgres95 1.01 sources in commit d31084e9d111:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/creatinh.c;hb=d31084e9d111#l349
Backpatch all the way back.
David Rowley noticed this while reviewing a patch of mine.
Discussion: https://postgr.es/m/CAKJS1f8Dn7swSEhOWwzZzssW7747YB=2Hi+T7uGud40dur69-g@mail.gmail.com
This will allow the pending patch for parallel CREATE INDEX to work
on system catalogs, and to provide the same level of protection
against use of user indexes while they are being rebuilt that we
have for non-parallel CREATE INDEX.
Patch by me, reviewed by Peter Geoghegan.
Discussion: http://postgr.es/m/CA+TgmoYN-YQU9JsGQcqFLovZ-C+Xgp1_xhJQad=cunGG-_p5gg@mail.gmail.com
Discussion: http://postgr.es/m/CAH2-Wzkv4UNkXYhqQRqk-u9rS7h5c-4cCW+EqQ8K_WSeS43aZg@mail.gmail.com
Typo in 9c7d06d60680c7f00d931233873dee81fdb311c6
Reported-by: Masahiko Sawada
The "callback" that this comment was referring to was removed by commit
c0a15e07cd718cb6e455e68328f522ac076a0e4b, so update to match the current
code.
This was hardly tested at all. The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
Reorder header files in joinrels.c and pathnode.c in alphabetical order,
removing unnecessary ones.
Author: Etsuro Fujita
node->partitioned_rels is only set in UPDATE/DELETE cases, but
ExecInitModifyTable only uses its "rel" variable in INSERT cases,
so the extra logic to find the root rel is just a waste of complexity
and cycles.
Etsuro Fujita, reviewed by Amit Langote
Discussion: https://postgr.es/m/93cf9816-2f7d-0f67-8ed2-4a4e497a6ab8@lab.ntt.co.jp
Add the ability to advance both physical and logical replication slots
using a new user function pg_replication_slot_advance().
For logical slots, advancing means records are consumed as fast as
possible and changes are not given to the output plugin for sending.
This makes the second phase of replication slot creation (after
SNAPBUILD_FULL_SNAPSHOT is reached) faster, especially when there are
big transactions, as the reorder buffer does not have to deal with data
changes and does not have to spill to disk.
Author: Petr Jelinek
Reviewed-by: Simon Riggs
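A sketch of the new function in use (slot name and plugin are
illustrative):

    SELECT pg_create_logical_replication_slot('myslot', 'test_decoding');
    -- Consume WAL up to the current write position without passing any
    -- changes to the output plugin:
    SELECT pg_replication_slot_advance('myslot', pg_current_wal_lsn());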
This creates a single function, JsonEncodeDateTime(), which formats these
date/time data types in an efficient and consistent manner. This will be
all the more important when we come to jsonpath, so that we don't have to
implement yet more code doing the same thing in two more places.
This also extends the code to handle time and timetz types which were
not previously handled specially. This requires exposing the time2tm and
timetz2tm functions.
Patch from Nikita Glukhov
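A sketch of the user-visible effect for the newly covered types (the
exact output format is an assumption, based on the ISO style used for
the other date/time types):

    -- time and timetz now go through the common JsonEncodeDateTime() path,
    -- consistent with date/timestamp/timestamptz:
    SELECT to_json('13:30:15'::time);         -- "13:30:15"
    SELECT to_json('13:30:15+05'::timetz);    -- "13:30:15+05:00"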
In this case, the macros SET_8_BYTES(), GET_8_BYTES(), SET_4_BYTES(),
GET_4_BYTES() are no-ops, so we can just remove them.
The plan is to perhaps remove them from the source code altogether, so
we'll start here.
Discussion: https://www.postgresql.org/message-id/5d51721a-69ef-2053-9172-599b539f0628@2ndquadrant.com
Patch by: "Rader, David" <davidr@openscg.com>
If a query against an inheritance tree runs concurrently with an ALTER
TABLE that's disinheriting one of the tree members, it's possible to get
a "could not find inherited attribute" error because after obtaining lock
on the removed member, make_inh_translation_list sees that its columns
have attinhcount=0 and decides they aren't the columns it's looking for.
An ideal fix, perhaps, would avoid including such a just-removed member
table in the query at all; but there seems no way to accomplish that
without adding expensive catalog rechecks or creating a likelihood of
deadlocks. Instead, let's just drop the check on attinhcount. In this
way, a query that's included a just-disinherited child will still
succeed, which is not a completely unreasonable behavior.
This problem has existed for a long time, so back-patch to all supported
branches. Also add an isolation test verifying related behaviors.
Patch by me; the new isolation test is based on Kyotaro Horiguchi's work.
Discussion: https://postgr.es/m/20170626.174612.23936762.horiguchi.kyotaro@lab.ntt.co.jp
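A sketch of the race (two sessions, hypothetical tables):

    -- Setup:
    --   CREATE TABLE parent (a int);
    --   CREATE TABLE child () INHERITS (parent);
    -- Session 1 plans a query over the inheritance tree:
    BEGIN;
    SELECT count(*) FROM parent;
    -- If session 2 concurrently runs
    --   ALTER TABLE child NO INHERIT parent;
    -- and session 1 obtains its lock on child only afterwards, then
    -- make_inh_translation_list() saw attinhcount = 0 on child's columns
    -- and failed with "could not find inherited attribute".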
If we flatten a subquery whose target list contains constants or
expressions, and those output columns are used in GROUPING SET columns,
the planner was capable of doing the wrong thing by merging a pulled-up
expression into the surrounding expression during const-simplification.
Then the late processing that attempts to match subexpressions to
grouping sets would fail to find a match, with the effect that those
columns would not go to null when expected.

To fix, wrap such subquery outputs in PlaceHolderVars, ensuring that
they preserve their separate identity throughout the planner's expression
processing. This is a bit of a band-aid, because the wrapper defeats
const-simplification even in places where it would be safe to allow.
But a nicer fix would likely be too invasive to back-patch, and the
consequences of the missed optimizations probably aren't large in most
cases.

Back-patch to 9.5 where grouping sets were introduced.

Heikki Linnakangas, with small mods and better test cases by me;
additional review by Andrew Gierth

Discussion: https://postgr.es/m/7dbdcf5c-b5a6-ef89-4958-da212fe10176@iki.fi
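A sketch of the affected query shape (names hypothetical):

    -- The flattened subquery output "b" is a constant, not a plain Var.
    SELECT a, b
    FROM (SELECT g AS a, 'const'::text AS b
          FROM generate_series(1, 2) g) s
    GROUP BY GROUPING SETS ((a), (b));
    -- Expected: b reads as NULL in the rows grouped only by (a).  Before
    -- the fix, the pulled-up constant could be merged into surrounding
    -- expressions, and those rows failed to go to null.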
Patch by Ashutosh Sharma <ashu.coek88@gmail.com>
Add the ability to label a column's default value in the catalog header,
and implement this for pg_attribute. A new function in Catalog.pm is
used to fill in a tuple with defaults. The build process will complain
loudly if a catalog entry is incomplete.
Commit 8137f2c3232 labeled variable length columns for the C preprocessor.
Expose that label to genbki.pl so we can exclude those columns from schema
macros in a general fashion. Also, format schema macro entries according
to their types.
This means slightly less code maintenance, but more importantly it's a
proving ground for mechanisms intended to be used in later commits.
While at it, I (Álvaro) couldn't resist making some changes in
genbki.pl: rename some functions to actually indicate their purpose
instead of actively misleading onlookers; and don't iterate on the whole
of pg_type to find the entry for each catalog row, using a hash instead
of an array.
Author: John Naylor, some changes by Álvaro Herrera
Discussion: https://postgr.es/m/CAJVSVGVJHwD8sfDfZW9TbCHWKf=C1YDRM-rF=2JenRU_y+VcFg@mail.gmail.com
Reported-by: Christoph Dreis
Discussion: https://postgr.es/m/007e01d3519e$2734ca10$759e5e30$@freenet.de
Author: Christoph Dreis
This was nearly the same code. Extend wait_for_catchup to allow waiting
for pg_current_wal_lsn() and use that in the subscription tests. Also
change one use in the pg_rewind tests to use this.
Also remove some broken code in wait_for_catchup and
wait_for_slot_catchup. The error message in case the waiting failed
wanted to show the current LSN, but the way it was written never
worked. So since nobody ever cared, just remove it.
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
This should have been done in commit 18ce3a4ab, which added that parameter
to ExplainOneQuery, but it was overlooked. The omission makes it impossible for
a user of the hook to pass the queryEnv down to ExplainOnePlan.
It's too late to change this API in v10, I suppose, but fortunately
passing NULL to ExplainOnePlan will work in nearly all interesting
cases in v10. That might not be true forever, so we'd better fix it.
Tatsuro Yamada, reviewed by Thomas Munro
Discussion: https://postgr.es/m/890e8dd9-c1c7-a422-6892-874f5eaee048@lab.ntt.co.jp
"c.f." should be "cf.".
PL/pgSQL "pins" internally generated portals so that user code cannot
close them by guessing their names. Add this functionality to PL/Perl
and PL/Python as well, preventing users from manually closing cursors
created by spi_query and plpy.cursor, respectively. (PL/Tcl does not
currently offer any cursor functionality.)
Existing tests only covered returning explicitly named portals as
refcursor. The unnamed cursor case was recently broken without a test
failing.
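A sketch of the previously untested case:

    -- Unnamed cursor: no name is assigned before OPEN, so the server
    -- generates a portal name (e.g. "<unnamed portal 1>") itself.
    CREATE FUNCTION get_unnamed_cur() RETURNS refcursor
    LANGUAGE plpgsql
    AS $$
    DECLARE
        c refcursor;   -- contrast: c refcursor := 'mycur' names the portal
    BEGIN
        OPEN c FOR SELECT 1 AS x;
        RETURN c;
    END;
    $$;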
This reverts commit b3617cdfbba1b5381e9d1c6bc0839500e8eb7273.
This broke returning unnamed cursors from PL/pgSQL functions.
Apparently, there are no test cases for this.
It seems incorrect to assume that the list of CkptSortItems can never
contain duplicate page numbers: concurrent activity could result in some
page getting dropped from a low-numbered buffer and later loaded into a
high-numbered buffer while BufferSync is scanning the buffer pool.
If that happened, the comparator would give self-inconsistent results,
potentially confusing qsort(). Saving one comparison step is not worth
possibly getting the sort wrong.
So far as I can tell, nothing would actually go wrong given our current
implementation of qsort(). It might get a bit slower than expected
if there were a large number of duplicates of one value, but that's
surely a probability-epsilon case. Still, the comment is wrong,
and if we ever switched to another sort implementation it might be
less forgiving.
In passing, avoid casting away const-ness of the argument pointers;
I've not seen any compiler complaints from that, but it seems likely
that some compilers would not like it.
Back-patch to 9.6 where this code came in, just in case I've underestimated
the possible consequences.
Discussion: https://postgr.es/m/18437.1515607610@sss.pgh.pa.us
Without this, Parallel Append can end up with extra children.
Report by Rajkumar Raghuwanshi. Fix by Amit Khandekar. Brown
paper bag bug by me.
Discussion: http://postgr.es/m/CAKcux6mBF-NiddyEe9LwymoUC5+wh8bQJ=uk2gGkOE+L8cv=LA@mail.gmail.com
PL/pgSQL "pins" internally generated (unnamed) portals so that user code
cannot close them by guessing their names. This logic is also useful in
other languages and really for any code. So move that logic into SPI.
An unnamed portal obtained through SPI_cursor_open() and related
functions is now automatically pinned, and SPI_cursor_close()
automatically unpins a portal that is pinned.
In the core distribution, this affects PL/Perl and PL/Python, preventing
users from manually closing cursors created by spi_query and
plpy.cursor, respectively. (PL/Tcl does not currently offer any cursor
functionality.)
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
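A PL/Perl sketch of the user-visible effect (spi_cursor_close is
PL/Perl's existing cursor-close call; the exact error text is an
assumption):

    DO LANGUAGE plperl $$
        my $cursor = spi_query("SELECT generate_series(1, 3) AS n");
        my $row = spi_fetchrow($cursor);
        # The portal behind $cursor is now pinned by SPI, so closing it
        # manually is rejected instead of silently breaking the query loop:
        spi_cursor_close($cursor);    # ERROR: cannot drop pinned portal
    $$;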
The previous code gave the same error message for attempting to drop
pinned and active portals, but those are separate states, so give
separate error messages.