path: root/doc/src
Commit message (Author, Age)
...
* Restrict copying of invalidated replication slots. (Masahiko Sawada, 2025-04-03)

  Previously, invalidated logical and physical replication slots could be copied
  using the pg_copy_logical_replication_slot and pg_copy_physical_replication_slot
  functions. Replication slots that were invalidated for reasons other than WAL
  removal retained their restart_lsn. This meant that a new slot copied from an
  invalidated slot could have a restart_lsn pointing to a WAL segment that might
  have already been removed.

  This commit restricts the copying of invalidated replication slots.

  Backpatch to v16, where slots could retain their restart_lsn when invalidated
  for reasons other than WAL removal. For v15 and earlier, this check is not
  required since slots can only be invalidated due to WAL removal, and existing
  checks already handle this issue.

  Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
  Reviewed-by: vignesh C <vignesh21@gmail.com>
  Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
  Reviewed-by: Peter Smith <smithpb2250@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Discussion: https://postgr.es/m/CANhcyEU65aH0VYnLiu%3DOhNNxhnhNhwcXBeT-jvRe1OiJTo_Ayg%40mail.gmail.com
  Backpatch-through: 16

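A quick sketch of the behavior described above (the slot names are hypothetical):

      -- Now rejected with an error if 'old_slot' has been invalidated; previously
      -- this could yield a copy whose restart_lsn pointed at already-removed WAL.
      SELECT pg_copy_logical_replication_slot('old_slot', 'new_slot');
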
* libpq: Add support for dumping SSL key material to file (Daniel Gustafsson, 2025-04-03)

  This adds a new connection parameter which instructs libpq to write out key
  material client-side into a file in order to make connection debugging with
  Wireshark and similar tools possible. The file format used is the standardized
  NSS format.

  Author: Abhishek Chanda <abhishek.becs@gmail.com>
  Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Discussion: https://postgr.es/m/CAKiP-K85C8uQbzXKWf5wHQPkuygGUGcufke713iHmYWOe9q2dA@mail.gmail.com

* Add support for sorted gist index builds to btree_gist (Heikki Linnakangas, 2025-04-03)

  This enables sortsupport in the btree_gist extension for faster builds of gist
  indexes. The sorted gist index build strategy is now the default. Regression
  tests are unchanged (except for one small change in the 'enum' test to add
  coverage for enum values added later) and are using the sorted build strategy
  instead.

  One version of this was committed a long time ago already, in commit
  9f984ba6d2, but it was quickly reverted because of buildfarm failures. The
  failures were presumably caused by some small bugs, but we never got around to
  debugging and committing it again. This patch was written from scratch,
  implementing the same idea, with some fragments and ideas from the original
  patch.

  Author: Bernd Helmle <mailings@oopsware.de>
  Author: Andrey Borodin <x4mmm@yandex-team.ru>
  Discussion: https://www.postgresql.org/message-id/64d324ce2a6d535d3f0f3baeeea7b25beff82ce4.camel@oopsware.de

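Nothing changes in the SQL needed to use this; a build like the following (table and column names are hypothetical) should simply pick up the sorted strategy where btree_gist provides sortsupport:

      CREATE EXTENSION IF NOT EXISTS btree_gist;
      CREATE TABLE events (id int, created_at timestamptz);
      CREATE INDEX events_created_gist ON events USING gist (created_at);
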
* Doc: Improve -R option added in e5aeed4b80. (Amit Kapila, 2025-04-03)

  Author:
  Reviewed-by: Peter Smith <smithpb2250@gmail.com>
  Reviewed-by: vignesh C <vignesh21@gmail.com>
  Discussion: https://postgr.es/m/CAHut+PvJPnaL=70SbBe3fYg2nq74Z=Yv4X=zRpUWYfOi-q6=2w@mail.gmail.com

* Fix slot synchronization for two_phase enabled slots. (Amit Kapila, 2025-04-03)

  The issue is that the transactions prepared before two-phase decoding is
  enabled can fail to replicate to the subscriber after being committed on a
  promoted standby following a failover. This is because the two_phase_at field
  of a slot, which tracks the LSN from which two-phase decoding starts, is not
  synchronized to standby servers. Without two_phase_at, the logical decoding
  might incorrectly identify prepared transactions as already replicated to the
  subscriber after promotion of the standby server, causing them to be skipped.

  To address the issue on HEAD, the two_phase_at field of the slot is exposed by
  the pg_replication_slots view and allows the slot synchronization to copy this
  value to the corresponding synced slot on the standby server.

  This bug is likely to occur if the user toggles the two_phase option to true
  after initial slot creation. Given that altering the two_phase option of a
  replication slot is not allowed in PostgreSQL 17, this bug is less likely to
  occur.

  We can't change the view/function definition in back branches, so we can't push
  the same fix there, but we are brainstorming an appropriate solution for PG17.

  Author: Zhijie Hou <houzj.fnst@fujitsu.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/TYAPR01MB5724CC7C288535BBCEEE65DA94A72@TYAPR01MB5724.jpnprd01.prod.outlook.com

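The newly exposed column can be inspected directly, for example:

      SELECT slot_name, two_phase, two_phase_at
        FROM pg_replication_slots;
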
* Change SQL-language functions to use the plan cache. (Tom Lane, 2025-04-02)

  In the historical implementation of SQL functions (if they don't get inlined),
  we built plans for all the contained queries at first call within an outer
  query, and then re-used those plans for the duration of the outer query, and
  then forgot everything. This was not ideal, not least because the plans could
  not be customized to specific values of the function's parameters. Our
  plancache infrastructure seems mature enough to be used here. That will solve
  both the problem with not being able to build custom plans and the problem
  with not being able to share work across successive outer queries.

  Aside from those performance concerns, this change fixes a longstanding
  bugaboo with SQL functions: you could not write DDL that would affect later
  statements in the same function. That's mostly still true with new-style SQL
  functions, since the results of parse analysis are baked into the stored query
  trees (and protected by dependency records). But for old-style SQL functions,
  it will now work much as it does with PL/pgSQL functions, because we delay
  parse analysis and planning of each query until we're ready to run it. Some
  edge cases that require replanning are now handled better too; see for example
  the new rowsecurity test, where we now detect an RLS context change that was
  previously missed.

  One other edge-case change that might be worthy of a release note is that we
  now insist that a SQL function's result be generated by the physically-last
  query within it. Previously, if the last original query was deleted by a DO
  INSTEAD NOTHING rule, we'd be willing to take the result from the preceding
  query instead. This behavior was undocumented except in source-code comments,
  and it seems hard to believe that anyone's relying on it.

  Along the way to this feature, we needed a few infrastructure changes:

  * The plancache can now take either a raw parse tree or an
    analyzed-but-not-rewritten Query as the starting point for a
    CachedPlanSource. If given a Query, it is caller's responsibility that
    nothing will happen to invalidate that form of the query. We use this for
    new-style SQL functions, where what's in pg_proc is serialized Query(s) and
    we trust the dependency mechanism to disallow DDL that would break those.

  * The plancache now offers a way to invoke a post-rewrite callback to
    examine/modify the rewritten parse tree when it is rebuilding the parse
    trees after a cache invalidation. We need this because SQL functions
    sometimes adjust the parse tree to make its output exactly match the
    declared result type; if the plan gets rebuilt, that has to be re-done.

  * There is a new backend module utils/cache/funccache.c that abstracts the
    idea of caching data about a specific function usage (a particular function
    and set of input data types). The code in it is moved almost verbatim from
    PL/pgSQL, which has done that for a long time. We use that logic now for
    SQL-language functions too, and maybe other PLs will have use for it in the
    future.

  Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru>
  Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
  Discussion: https://postgr.es/m/8216639.NyiUUSuA9g@aivenlaptop

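A minimal sketch of the "DDL affecting later statements" case the message describes, for an old-style (string-body) SQL function; all names here are hypothetical, and this assumes the delayed per-statement parse analysis described above:

      CREATE FUNCTION make_and_count() RETURNS int
      LANGUAGE sql
      AS $$
          CREATE TABLE scratch (x int);
          INSERT INTO scratch VALUES (1), (2);
          SELECT count(*)::int FROM scratch;   -- result comes from the last query
      $$;

      SELECT make_and_count();
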
* docs: Fix column count attribute in table (Heikki Linnakangas, 2025-04-02)

  Nothing seems to actually depend on the attribute, as the docs built
  successfully, but let's be tidy.

  Reported offlist by Matthias van de Meent

* docs: Add a new section and a table listing protocol versions (Heikki Linnakangas, 2025-04-02)

  Move the discussion on protocol versions and version negotiation to a new
  "Protocol versions" section. Add a table listing all the different protocol
  versions, starting from the obsolete protocol version 2, and the PostgreSQL
  versions that support each.

  Discussion: https://www.postgresql.org/message-id/69f53970-1d55-4165-9151-6fb524e36af9@iki.fi

* Make cancel request keys longer (Heikki Linnakangas, 2025-04-02)

  Currently, the cancel request key is a 32-bit token, which isn't very much
  entropy. If you want to cancel another session's query, you can brute-force
  it. In most environments, an unauthorized cancellation of a query isn't very
  serious, but it nevertheless would be nice to have more protection from it.
  Hence make the key longer, to make it harder to guess.

  The longer cancellation keys are generated when using the new protocol version
  3.2. For connections using version 3.0, short 4-byte keys are still used.

  The new longer key length is not hardcoded in the protocol anymore; the client
  is expected to deal with variable-length keys, up to 256 bytes. This
  flexibility allows e.g. a connection pooler to add more information to the
  cancel key, which might be useful for finding the connection.

  Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
  Discussion: https://www.postgresql.org/message-id/508d0505-8b7a-4864-a681-e7e5edfe32aa@iki.fi

* libpq: Add min/max_protocol_version connection options (Heikki Linnakangas, 2025-04-02)

  All supported versions of the PostgreSQL server send the
  NegotiateProtocolVersion message when an unsupported minor protocol version is
  requested by a client. But many other applications that implement the
  PostgreSQL protocol (connection poolers, or other databases) do not, and the
  same is true for PostgreSQL server versions older than 9.3. Connecting to such
  other applications thus fails if a client requests a protocol version
  different than 3.0.

  This patch adds a max_protocol_version connection option to libpq that
  specifies the protocol version that libpq should request from the server.
  Currently only 3.0 is supported, but that will change in a future commit that
  bumps the protocol version. Even after that version bump the default will
  likely stay 3.0 for the time being. Once more of the ecosystem supports the
  NegotiateProtocolVersion message we might want to change the default to the
  latest minor version.

  This also adds the similar min_protocol_version connection option, to allow
  the client to specify that connecting should fail if a lower protocol version
  is attempted by the server. This can be used to ensure that certain protocol
  features are used, which can be particularly useful if those features impact
  security.

  Author: Jelte Fennema-Nio <postgres@jeltef.nl>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
  Discussion: https://www.postgresql.org/message-id/CAGECzQTfc_O%2BHXqAo5_-xG4r3EFVsTefUeQzSvhEyyLDba-O9w@mail.gmail.com
  Discussion: https://www.postgresql.org/message-id/CAGECzQRbAGqJnnJJxTdKewTsNOovUt4bsx3NFfofz3m2j-t7tA@mail.gmail.com

* docs: Update phrase on message lengths in the protocol (Heikki Linnakangas, 2025-04-02)

  The reasoning for why all the message formats are parseable without the
  explicit message length field is anachronistic; the real reason is that
  protocol version 2 did not have a message length field. There's nothing wrong
  with relying on the message length, like we do in the CopyData messages, even
  though it often still makes sense to have length fields for individual parts
  in messages.

  Discussion: https://www.postgresql.org/message-id/02a4eed2-98f0-4796-9d4f-12128ff44fe0@iki.fi

* Add support for NOT ENFORCED in foreign key constraints (Peter Eisentraut, 2025-04-02)

  This expands the NOT ENFORCED constraint flag, previously only supported for
  CHECK constraints (commit ca87c415e2f), to foreign key constraints.

  Normally, when a foreign key constraint is created on a table, action and
  check triggers are added to maintain data integrity. With this patch, if a
  constraint is marked as NOT ENFORCED, integrity checks are no longer required,
  making these triggers unnecessary. Consequently, when creating a NOT ENFORCED
  foreign key constraint, triggers will not be created, and the constraint will
  be marked as NOT VALID. Similarly, if an existing foreign key constraint is
  changed to NOT ENFORCED, the associated triggers will be dropped, and the
  constraint will also be marked as NOT VALID. Conversely, if a NOT ENFORCED
  foreign key constraint is changed to ENFORCED, the necessary triggers will be
  created, and the constraint will be changed to VALID by performing the
  necessary validation.

  Since not-enforced foreign key constraints have no triggers, the shortcut used
  for example in psql and pg_dump to skip looking for foreign keys if the
  relation is known not to have triggers no longer applies. (It already didn't
  work for partitioned tables.)

  Author: Amul Sul <sulamul@gmail.com>
  Reviewed-by: Joel Jacobson <joel@compiler.org>
  Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: jian he <jian.universality@gmail.com>
  Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
  Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
  Reviewed-by: Isaac Morland <isaac.morland@gmail.com>
  Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com>
  Tested-by: Triveni N <triveni.n@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b962c5AcYW9KUt_R_ER5qs3fUGbe4az-SP-vuwPS-w-AGA@mail.gmail.com

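A rough sketch of how this might look in SQL, assuming the ENFORCED/NOT ENFORCED keywords attach to the constraint definition the same way they do for CHECK constraints (table and constraint names are hypothetical):

      CREATE TABLE parent (id int PRIMARY KEY);
      CREATE TABLE child (
          parent_id int,
          CONSTRAINT child_parent_fk FOREIGN KEY (parent_id)
              REFERENCES parent (id) NOT ENFORCED   -- no triggers, marked NOT VALID
      );

      -- Switching to ENFORCED creates the triggers and validates existing rows.
      ALTER TABLE child ALTER CONSTRAINT child_parent_fk ENFORCED;
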
* Improve error message when standby does not accept connections. (Fujii Masao, 2025-04-02)

  Even after reaching the minimum recovery point, if there are long-lived write
  transactions with 64 subtransactions on the primary, the recovery snapshot may
  not yet be ready for hot standby, delaying read-only connections on the
  standby. Previously, when read-only connections were not accepted due to this
  condition, the following error message was logged:

      FATAL: the database system is not yet accepting connections
      DETAIL: Consistent recovery state has not been yet reached.

  This DETAIL message was misleading because the following message was already
  logged in this case:

      LOG: consistent recovery state reached

  This contradiction, i.e., indicating that the recovery state was consistent
  while also stating it wasn't, caused confusion.

  This commit improves the error message to better reflect the actual state:

      FATAL: the database system is not yet accepting connections
      DETAIL: Recovery snapshot is not yet ready for hot standby.
      HINT: To enable hot standby, close write transactions with more than 64 subtransactions on the primary server.

  To implement this, the commit introduces a new postmaster signal,
  PMSIGNAL_RECOVERY_CONSISTENT. When the startup process reaches a consistent
  recovery state, it sends this signal to the postmaster, allowing it to
  correctly recognize that state.

  Since this is not a clear bug, the change is applied only to the master branch
  and is not back-patched.

  Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
  Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
  Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
  Discussion: https://postgr.es/m/02db8cd8e1f527a8b999b94a4bee3165@oss.nttdata.com

* Doc: add information about partition locking (David Rowley, 2025-04-02)

  The documentation around locking of partitions for the executor startup phase
  of run-time partition pruning wasn't clear about which partitions were being
  locked. Fix that.

  Reviewed-by: Tender Wang <tndrwang@gmail.com>
  Discussion: https://postgr.es/m/CAApHDvp738G75HfkKcfXaf3a8s%3D6mmtOLh46tMD0D2hAo1UCzA%40mail.gmail.com
  Backpatch-through: 13

* Introduce a SQL-callable function array_sort(anyarray). (Tom Lane, 2025-04-01)

  Create a function that will sort the elements of an array according to the
  element type's sort order. If the array has more than one dimension, the
  sub-arrays of the first dimension are sorted per normal array-comparison
  rules, leaving their contents alone.

  In support of this, add pg_type.typarray to the set of fields cached by the
  typcache.

  Author: Junwang Zhao <zhjwpku@gmail.com>
  Co-authored-by: Jian He <jian.universality@gmail.com>
  Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
  Discussion: https://postgr.es/m/CAEG8a3J41a4dpw_-F94fF-JPRXYxw-GfsgoGotKcjs9LVfEEvw@mail.gmail.com

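For illustration, usage is expected to be as simple as:

      SELECT array_sort(ARRAY[3, 1, 2]);        -- {1,2,3}
      SELECT array_sort(ARRAY['b', 'c', 'a']);  -- {a,b,c}
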
* doc: Adjust some notes about pg_upgrade's file transfer modes. (Nathan Bossart, 2025-04-01)

  --copy-file-range and --swap were not mentioned in a few places that discuss
  the available file transfer modes. This entire page would likely benefit from
  an overhaul, but that's v19 material at this point.

  Oversights in commits d93627bcbe and 626d7236b6.

* aio: Add pg_aios view (Andres Freund, 2025-04-01)

  The new view lists all IO handles that are currently in use and is mainly
  useful for PG developers, but may also be useful when tuning PG.

  Bumps catversion.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt

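As a quick look at what the view exposes (the exact column set is not listed here, so just select everything):

      SELECT * FROM pg_aios;
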
* docs: Add acronym and glossary entries for I/O and AIO (Andres Freund, 2025-04-01)

  These are fairly basic, but better than nothing.

  While there are several opportunities to link to these entries, this patch
  does not add any. They will however be referenced by future patches.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/20250326183102.92.nmisch@google.com

* doc: Mention clock synchronization recommendation for hot_standby_feedback (Peter Eisentraut, 2025-03-31)

  hot_standby_feedback mechanics assume that clocks are synchronized, but this
  was not clear from the documentation.

  Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
  Reviewed-by: Euler Taveira <euler@eulerto.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: vignesh C <vignesh21@gmail.com>
  Discussion: https://postgr.es/m/CAKZiRmwBcALLrDgCyEhHP1enUxtPMjyNM_d1A2Lng3_6Rf4Qfw%40mail.gmail.com

* Enable IO concurrency on all systems (Andres Freund, 2025-03-30)

  Previously effective_io_concurrency and maintenance_io_concurrency could not
  be set above 0 on machines without fadvise support. AIO enables IO concurrency
  without such support, via io_method=worker.

  Currently only subsystems using the read stream API will take advantage of
  this. Other users of maintenance_io_concurrency (like recovery prefetching)
  which leverage OS advice directly will not benefit from this change. In those
  cases, maintenance_io_concurrency will have no effect on I/O behavior.

  Author: Melanie Plageman <melanieplageman@gmail.com>
  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/CAAKRu_atGgZePo=_g6T3cNtfMf0QxpvoUh5OUqa_cnPdhLd=gw@mail.gmail.com

* docs: Reframe track_io_timing related docs as wait time (Andres Freund, 2025-03-30)

  With AIO it does not make sense anymore to track the time for each individual
  IO, as multiple IOs can be in-flight at the same time. Instead we now track
  the time spent *waiting* for IOs. This should be reflected in the docs.

  While, so far, we only do a subset of reads, and no other operations, via AIO,
  describing the GUC and view columns as measuring IO waits is accurate for
  synchronous and asynchronous IO.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/5dzyoduxlvfg55oqtjyjehez5uoq6hnwgzor4kkybkfdgkj7ag@rbi4gsmzaczk

* amcheck: Add gin_index_check() to verify GIN index (Tomas Vondra, 2025-03-29)

  Adds a new function, validating two kinds of invariants on a GIN index:

  - parent-child consistency: Paths in a GIN graph have to contain consistent
    keys. Tuples on parent pages consistently include tuples from child pages;
    parent tuples do not require any adjustments.

  - balanced-tree / graph: Each internal page has at least one downlink, and can
    reference either only leaf pages or only internal pages.

  The GIN verification is based on work by Grigory Kryachko, reworked by Heikki
  Linnakangas and with various improvements by Andrey Borodin. Investigation and
  fixes for multiple bugs by Kirill Reshke.

  Author: Grigory Kryachko <GSKryachko@gmail.com>
  Author: Heikki Linnakangas <hlinnaka@iki.fi>
  Author: Andrey Borodin <amborodin@acm.org>
  Reviewed-By: José Villanova <jose.arthur@gmail.com>
  Reviewed-By: Aleksander Alekseev <aleksander@timescale.com>
  Reviewed-By: Nikolay Samokhvalov <samokhvalov@gmail.com>
  Reviewed-By: Andres Freund <andres@anarazel.de>
  Reviewed-By: Tomas Vondra <tomas.vondra@enterprisedb.com>
  Reviewed-By: Kirill Reshke <reshkekirill@gmail.com>
  Reviewed-By: Mark Dilger <mark.dilger@enterprisedb.com>
  Reviewed-By: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/45AC9B0A-2B45-40EE-B08F-BDCF5739D1E1%40yandex-team.ru

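A minimal sketch of invoking the check, assuming the function takes the index as a regclass argument like amcheck's existing B-tree functions (the index name is hypothetical):

      CREATE EXTENSION IF NOT EXISTS amcheck;
      SELECT gin_index_check('docs_tags_gin');
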
* pg_createsubscriber: Add '--all' option. (Amit Kapila, 2025-03-28)

  The '--all' option indicates that the tool queries the source server
  (publisher) for all databases and creates subscriptions on the target server
  (subscriber) for databases with matching names. Without this, the user needs
  to explicitly specify all databases by using the -d option for each database.
  This simplifies converting a physical standby to a logical subscriber,
  particularly during upgrades.

  The options '--database', '--publication', '--subscription', and
  '--replication-slot' cannot be used when '--all' is specified.

  Author: Shubham Khanna <khannashubham1197@gmail.com>
  Reviewed-by: vignesh C <vignesh21@gmail.com>
  Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
  Reviewed-by: Euler Taveira <euler@eulerto.com>
  Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Peter Smith <smithpb2250@gmail.com>
  Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
  Discussion: https://postgr.es/m/CAHv8RjKhA=_h5vAbozzJ1Opnv=KXYQHQ-fJyaMfqfRqPpnC2bA@mail.gmail.com

* Remove the query_id_squash_values GUC (Álvaro Herrera, 2025-03-27)

  Commit 62d712ecfd94 introduced the capability to calculate the same queryId
  for queries with different lengths of constants in a list for an IN clause.
  This behavior was originally enabled with a GUC query_id_squash_values. After
  a discussion about the value of such a GUC, it was decided to back out of the
  use of a GUC and make the squashing behavior the only available option.

  Author: Sami Imseih <samimseih@gmail.com>
  Discussion: https://postgr.es/m/Z-LZyygkkNyA8-kR@msg.df7cb.de
  Discussion: https://postgr.es/m/CA+q6zcVTK-3C-8NWV1oY2NZrvtnMCDqnyYYyk1T7WMUG65MeOQ@mail.gmail.com

* Simplify syntax for ALTER TABLE ALTER CONSTRAINT NO INHERIT (Álvaro Herrera, 2025-03-27)

  Commit d45597f72fe5 introduced the ability to change a not-null constraint
  from NO INHERIT to INHERIT and vice versa, but we included the SET noise word
  in the syntax for it. The SET turns out not to be necessary and goes against
  what the SQL standard says for other ALTER TABLE subcommands, so remove it.

  This changes the way this command is processed for constraint types other than
  not-null, so there are some error message changes.

  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
  Discussion: https://postgr.es/m/202503251602.vsxaehsyaoac@alvherre.pgsql

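In other words, the command is now spelled without SET; a sketch with hypothetical table and constraint names:

      -- previously: ALTER TABLE t ALTER CONSTRAINT t_a_not_null SET NO INHERIT;
      ALTER TABLE t ALTER CONSTRAINT t_a_not_null NO INHERIT;
      ALTER TABLE t ALTER CONSTRAINT t_a_not_null INHERIT;
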
* doc: Correct description of values used in FSM for indexes (Michael Paquier, 2025-03-27)

  The implementation of FSM for indexes is simpler than heap, where 0 is used to
  track if a page is in-use and (BLCKSZ - 1) if a page is free. One comment in
  indexfsm.c and one description in the documentation of pg_freespacemap were
  incorrect about that.

  Author: Alex Friedman <alexf01@gmail.com>
  Discussion: https://postgr.es/m/71eef655-c192-453f-ac45-2772fec2cb04@gmail.com
  Backpatch-through: 13

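This can be observed with the pg_freespacemap extension; for an index, the reported free space per page is expected to be either 0 (page in use) or BLCKSZ - 1 (page free). The index name below is hypothetical:

      CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
      SELECT * FROM pg_freespace('some_index');
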
* aio: Add io_method=io_uring (Andres Freund, 2025-03-26)

  Performing AIO using io_uring can be considerably faster than
  io_method=worker, particularly when lots of small IOs are issued, as
  a) the context-switch overhead for worker based AIO becomes more significant
  b) the number of IO workers can become limiting

  io_uring, however, is Linux-specific and requires an additional compile-time
  dependency (liburing).

  This implementation is fairly simple and there are substantial optimization
  opportunities.

  The description of the existing AIO_IO_COMPLETION wait event is updated to
  make the difference between it and the new AIO_IO_URING_EXECUTION clearer.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
  Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
  Discussion: https://postgr.es/m/20210223100344.llw5an2aklengrmn@alap3.anarazel.de
  Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m

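Switching methods is a configuration change; a sketch, assuming a server built with liburing and that io_method only takes effect at server start:

      ALTER SYSTEM SET io_method = 'io_uring';
      -- then restart the server
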
* aio: Add liburing dependency (Andres Freund, 2025-03-26)

  Will be used in a subsequent commit, to implement io_method=io_uring. Kept
  separate for easier review.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt

* doc: Mention possible ephemeral discrepancies in pg_stat_activity (Michael Paquier, 2025-03-27)

  Ephemeral inconsistencies across multiple attributes of pg_stat_activity can
  exist as the system is designed to be efficient with a low overhead. This
  question is raised by users from time to time based on the data read in the
  view, so let's add a note in the docs about this possibility.

  Author: Alex Friedman <alexf01@gmail.com>
  Reviewed-by: Sami Imseih <samimseih@gmail.com>
  Discussion: https://postgr.es/m/8a275154-a654-44b0-ab37-197802f04c7b@gmail.com

* Fix oversights in commit 8d5ceb113e3f7ddb627bd40b26438a9d2fa05512 (Robert Haas, 2025-03-26)

  It added bogus whitespace at the end of a line in the documentation. It should
  not have done that.

  The pg_overexplain tests must SET debug_parallel_query = false, not just RESET
  debug_parallel_query, or we get failures on test machines that make
  debug_parallel_query = true the default.

* pg_overexplain: Additional EXPLAIN options for debugging. (Robert Haas, 2025-03-26)

  There's a fair amount of information in the Plan and PlanState trees that
  isn't printed by any existing EXPLAIN option. This means that, when working on
  the planner, it's often necessary to rely on facilities such as
  debug_print_plan, which produce excessively voluminous output. Hence, use the
  new EXPLAIN extension facilities to implement EXPLAIN (DEBUG) and EXPLAIN
  (RANGE_TABLE) as extensions to the core EXPLAIN facility.

  A great deal more could be done here, and the specific choices about what to
  print and how are definitely arguable, but this is at least a starting point
  for discussion and a jumping-off point for possible future improvements.

  Reviewed-by: Sami Imseih <samimseih@gmail.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> (who didn't like it)
  Discussion: http://postgr.es/m/CA+TgmoZfvQUBWQ2P8iO30jywhfEAKyNzMZSR+uc2xr9PZBw6eQ@mail.gmail.com

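A sketch of trying the new options in a session, assuming the module can be loaded with LOAD (the queried table is arbitrary):

      LOAD 'pg_overexplain';
      EXPLAIN (DEBUG) SELECT * FROM pg_class;
      EXPLAIN (RANGE_TABLE) SELECT * FROM pg_class;
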
* Introduce PG_MODULE_MAGIC_EXT macro. (Tom Lane, 2025-03-26)

  This macro allows dynamically loaded shared libraries (modules) to provide a
  wired-in module name and version, and possibly other compile-time-constant
  fields in future. This information can be retrieved with the new
  pg_get_loaded_modules() function.

  This feature is expected to be particularly useful for modules that do not
  have any exposed SQL functionality and thus are not associated with a
  SQL-level extension object. But even for modules that do belong to extensions,
  being able to verify the actual code version can be useful.

  Author: Andrei Lepikhov <lepihov@gmail.com>
  Reviewed-by: Yurii Rashkovskii <yrashk@omnigres.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://postgr.es/m/dd4d1b59-d0fe-49d5-b28f-1e463b68fa32@gmail.com

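On the SQL side, the names and versions reported by loaded modules can then be listed with:

      SELECT * FROM pg_get_loaded_modules();
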
* dblink: SCRAM authentication pass-through (Peter Eisentraut, 2025-03-26)

  This enables SCRAM authentication for dblink (using dblink_fdw) when
  connecting to a foreign server without having to store a plain-text password
  in user mapping options.

  This uses the same approach as it was implemented for postgres_fdw in commit
  761c79508e7. (It also contains the equivalent of the subsequent fixes
  76563f88cfb and d2028e9bbc1.)

  Author: Matheus Alcantara <mths.dev@pm.me>
  Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAFY6G8ercA1KES%3DE_0__R9QCTR805TTyYr1No8qF8ZxmMg8z2Q%40mail.gmail.com

* Add support for gamma() and lgamma() functions. (Dean Rasheed, 2025-03-26)

  These are useful general-purpose math functions which are included in POSIX
  and C99, and are commonly included in other math libraries, so expose them as
  SQL-callable functions.

  Author: Dean Rasheed <dean.a.rasheed@gmail.com>
  Reviewed-by: Stepan Neretin <sncfmgg@gmail.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
  Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com>
  Discussion: https://postgr.es/m/CAEZATCXpGyfjXCirFk9au+FvM0y2Ah+2-0WSJx7MO368ysNUPA@mail.gmail.com

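For illustration, with the usual mathematical definitions these should behave roughly as follows:

      SELECT gamma(5);     -- 24, since gamma(n) = (n-1)!
      SELECT lgamma(5);    -- ~3.178, the natural logarithm of gamma(5)
      SELECT gamma(0.5);   -- ~1.772, i.e. sqrt(pi)
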
* Use relation name instead of OID in query jumbling for RangeTblEntry (Michael Paquier, 2025-03-26)

  custom_query_jumble (introduced in 5ac462e2b7ac as a node field attribute) is
  now assigned to the expanded reference name "eref" of RangeTblEntry, adding in
  the query jumble computation the non-qualified aliased relation name, without
  the list of column names. The relation OID is removed from the query jumbling.

  The effects of this change can be seen in the tests added by 3430215fe35f,
  where pg_stat_statements (PGSS) entries are now grouped using the relation
  name, ignoring the relation search_path may point at. For example, these two
  relations are different, but are now grouped in a single PGSS entry as they
  are assigned the same query ID:

      CREATE TABLE foo1.tab (a int);
      CREATE TABLE foo2.tab (b int);
      SET search_path = 'foo1';
      SELECT count(*) FROM tab;
      SET search_path = 'foo2';
      SELECT count(*) FROM tab;
      SELECT count(*) FROM foo1.tab;
      SELECT count(*) FROM foo2.tab;
      SELECT query, calls FROM pg_stat_statements WHERE query ~ 'FROM tab';
                 query          | calls
      --------------------------+-------
       SELECT count(*) FROM tab |     4
      (1 row)

  It is still possible to use an alias in the FROM clause to split these.

  This behavior is useful for relations re-created with the same name, where
  queries based on such relations would be grouped in the same PGSS entry. For
  permanent schemas, it should not really matter in practice. The main benefit
  is for workloads that use a lot of temporary relations, which are usually
  re-created with the same name continuously. These can be a heavy source of
  bloat in PGSS depending on the workload. Such entries can now be grouped
  together, improving the user experience.

  The original idea from Christoph Berg used catalog lookups to find temporary
  relations, something that the query jumble has never done, and it could cause
  some performance regressions. The idea to use RangeTblEntry.eref and the
  relation name, applying the same rules for all relations, temporary and not
  temporary, has been proposed by Tom Lane. The documentation additions have
  been suggested by Sami Imseih.

  Author: Michael Paquier <michael@paquier.xyz>
  Co-authored-by: Sami Imseih <samimseih@gmail.com>
  Reviewed-by: Christoph Berg <myon@debian.org>
  Reviewed-by: Lukas Fittl <lukas@fittl.com>
  Reviewed-by: Sami Imseih <samimseih@gmail.com>
  Discussion: https://postgr.es/m/Z9iWXKGwkm8RAC93@msg.df7cb.de

* Add pg_dump --with-{schema|data|statistics} options. (Jeff Davis, 2025-03-25)

  By adding the positive variants of options, in addition to the negative
  variants that already exist, users can be explicit about what pg_dump should
  produce.

  Discussion: https://postgr.es/m/bd0513e4b1ea2b2f2d06f02720c6579711cb62a6.camel@j-davis.com
  Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
  Reviewed-by: Andres Freund <andres@anarazel.de>

* pg_upgrade: Add --swap for faster file transfer. (Nathan Bossart, 2025-03-25)

  This new option instructs pg_upgrade to move the data directories from the old
  cluster to the new cluster and then to replace the catalog files with those
  generated for the new cluster. This mode can outperform --link, --clone,
  --copy, and --copy-file-range, especially on clusters with many relations.

  However, this mode creates many garbage files in the old cluster, which can
  prolong the file synchronization step if --sync-method=syncfs is used. To
  handle that, we recommend using --sync-method=fsync with this mode, and
  pg_upgrade internally uses "initdb --sync-only --no-sync-data-files" for file
  synchronization. pg_upgrade will synchronize the catalog files as they are
  transferred. We assume that the database files transferred from the old
  cluster were synchronized prior to upgrade.

  This mode also complicates reverting to the old cluster, so we recommend
  restoring from backup upon failure during or after file transfer. We did
  consider teaching pg_upgrade how to generate a revert script for such
  failures, but we decided against it due to the rarity of failing during file
  transfer, the complexity of generating the script, and the potential for
  misusing the script.

  The new mode is limited to clusters located in the same file system. With some
  effort, we could probably support upgrades between different file systems, but
  this mode is unlikely to offer much benefit if we have to copy the files
  across file system boundaries.

  It is also limited to upgrades from version 10 or newer. There are a few known
  obstacles for using swap mode to upgrade from older versions. For example, the
  visibility map format changed in v9.6, and the sequence tuple format changed
  in v10. In fact, swap mode omits the --sequence-data option in its uses of
  pg_dump and instead reuses the old cluster's sequence data files. While
  teaching swap mode to deal with these kinds of changes is surely possible (and
  we may have to deal with similar problems in the future, anyway), it doesn't
  seem worth the effort to support upgrades from long-unsupported versions.

  Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan

* pg_dump: Add --sequence-data. (Nathan Bossart, 2025-03-25)

  This new option instructs pg_dump to dump sequence data when the --no-data,
  --schema-only, or --statistics-only option is specified. This was originally
  considered for commit a7e5457db8, but it was left out at that time because
  there was no known use-case.

  A follow-up commit will use this to optimize pg_upgrade's file transfer step.

  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan

* initdb: Add --no-sync-data-files. (Nathan Bossart, 2025-03-25)

  This new option instructs initdb to skip synchronizing any files in database
  directories, the database directories themselves, and the tablespace
  directories, i.e., everything in the base/ subdirectory and any other
  tablespace directories. Other files, such as those in pg_wal/ and pg_xact/,
  will still be synchronized unless --no-sync is also specified.

  --no-sync-data-files is primarily intended for internal use by tools that
  separately ensure the skipped files are synchronized to disk. A follow-up
  commit will use this to help optimize pg_upgrade's file transfer step.

  The --sync-method=fsync implementation of this option makes use of a new
  exclude_dir parameter for walkdir(). When not NULL, exclude_dir specifies a
  directory to skip processing. The --sync-method=syncfs implementation of this
  option just skips synchronizing the non-default tablespace directories. This
  means that initdb will still synchronize some or all of the database files,
  but there's not much we can do about that.

  Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan

* Stats: use schemaname/relname instead of regclass. (Jeff Davis, 2025-03-25)

  For import and export, use schemaname/relname rather than regclass.

  This is more natural during export, fits with the other arguments better, and
  it gives better control over error handling in case we need to downgrade more
  errors to warnings.

  Also, use text for the argument types for schemaname, relname, and attname so
  that casts to "name" are not required.

  Author: Corey Huinker <corey.huinker@gmail.com>
  Discussion: https://postgr.es/m/CADkLM=ceOSsx_=oe73QQ-BxUFR2Cwqum7-UP_fPe22DBY0NerA@mail.gmail.com

* Minor doc update for commit 99f8f3fbbc. (Jeff Davis, 2025-03-25)

  Author: Corey Huinker <corey.huinker@gmail.com>

* psql: Make default \watch interval configurable (Daniel Gustafsson, 2025-03-25)

  The default interval for \watch to wait between executing queries, when
  executed without a specified interval, was hardcoded to two seconds. This adds
  the new variable WATCH_INTERVAL which is used to set the default interval,
  making it configurable for the user. This makes \watch the first command which
  has a user configurable default setting.

  Author: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
  Reviewed-by: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
  Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
  Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
  Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
  Discussion: https://postgr.es/m/B2FD26B4-8F64-4552-A603-5CC3DF1C7103@yesql.se

* libpq: Deprecate pg_int64. (Thomas Munro, 2025-03-25)

  Previously we used pg_int64 in three function prototypes in libpq. It was
  added by commit 461ef73f to expose the platform-dependent type used for int64
  in the C89 era. As of commit 962da900 it is defined as standard int64_t, and
  the dust seems to have settled.

  Let's just use int64_t directly in these three client-facing functions instead
  of (yet) another name. We've required C99 and thus <stdint.h> since PostgreSQL
  12, C89 and C++98 compilers are long gone, and client applications very likely
  use standard types for their own 64-bit needs.

  This also cleans up the obscure placement of a new #include <stdint.h>
  directive in postgres_ext.h, required for the new definition. The typedef was
  hiding in there for historical reasons, but it doesn't fit postgres_ext.h's
  own description of its purpose and there is no evidence of client applications
  including postgres_ext.h directly to see it.

  Keep a typedef marked deprecated for backward compatibility, but move it into
  libpq-fe.h where it was used.

  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Discussion: https://postgr.es/m/CA%2BhUKGKn_EkNNGMY5RzMcKP%2Ba6urT4JF%3DCPhw_zHtQwjvX6P2g%40mail.gmail.com

* Redefine max_files_per_process to control additionally opened files (Andres Freund, 2025-03-24)

  Until now max_files_per_process=N limited each backend to open N files in
  total (minus a safety factor), even if there were already more files opened in
  postmaster and inherited by backends. Change max_files_per_process to control
  how many additional files each process is allowed to open.

  The main motivation for this is the patch to add io_method=io_uring, which
  needs to open one file for each backend. Without this patch, even if
  RLIMIT_NOFILE is high enough, postmaster will fail in set_max_safe_fds() if
  started with a high max_connections. The cause of the failure is that, until
  now, set_max_safe_fds() subtracted the already open files from
  max_files_per_process.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Discussion: https://postgr.es/m/w6uiicyou7hzq47mbyejubtcyb2rngkkf45fk4q7inue5kfbeo@bbfad3qyubvs
  Discussion: https://postgr.es/m/CAGECzQQh6VSy3KG4pN1d=h9J=D1rStFCMR+t7yh_Kwj-g87aLQ@mail.gmail.com

* doc: Clarify required options for each action in pg_recvlogical. (Fujii Masao, 2025-03-25)

  Each pg_recvlogical action requires specific options. For example, --slot,
  --dbname, and --file must be specified with the --start action. Previously,
  the documentation did not clearly outline these requirements. This commit
  updates the documentation to explicitly state the necessary options for each
  action.

  Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
  Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
  Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
  Reviewed-by: Vignesh C <vignesh21@gmail.com>
  Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
  Discussion: https://postgr.es/m/OSCPR01MB14966930B4357BAE8C9D68A8AF5C72@OSCPR01MB14966.jpnprd01.prod.outlook.com

* postgres_fdw: improve security checks (Peter Eisentraut, 2025-03-24)

  SCRAM pass-through should not bypass the FDW security check as it was
  implemented for postgres_fdw in commit 761c79508e7.

  This commit improves the security check by adding new SCRAM pass-through
  checks to ensure that the required SCRAM connection options are not
  overwritten by the user mapping or foreign server options. This is meant to
  match the security requirements for a password-using connection.

  Since libpq has no SCRAM-specific equivalent of PQconnectionUsedPassword(), we
  enforce this instead by making the use_scram_passthrough option of
  postgres_fdw imply require_auth=scram-sha-256. This means that if
  use_scram_passthrough is set, some situations that might otherwise have worked
  are preempted, for example GSSAPI with delegated credentials. This could be
  enhanced in the future if there is desire for more flexibility.

  Reported-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Author: Matheus Alcantara <mths.dev@pm.me>
  Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAFY6G8ercA1KES%3DE_0__R9QCTR805TTyYr1No8qF8ZxmMg8z2Q%40mail.gmail.com

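Enabling the pass-through remains a server/user-mapping configuration; a rough sketch (server name, host, and database are hypothetical), keeping in mind that with this change use_scram_passthrough also implies require_auth=scram-sha-256:

      CREATE SERVER remote_srv
          FOREIGN DATA WRAPPER postgres_fdw
          OPTIONS (host 'remote.example.com', dbname 'appdb',
                   use_scram_passthrough 'true');
      CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv;
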
* Detect and Log multiple_unique_conflicts type conflict. (Amit Kapila, 2025-03-24)

  Introduce a new conflict type, multiple_unique_conflicts, to handle cases
  where an incoming row during logical replication violates multiple UNIQUE
  constraints. Previously, the apply worker detected and reported only the first
  encountered key conflict (insert_exists/update_exists), causing repeated
  failures as each constraint violation needed to be handled one by one, making
  the process slow and error-prone.

  With this patch, the apply worker checks all unique constraints upfront once
  the first key conflict is detected and reports multiple_unique_conflicts if
  multiple violations exist. This allows users to resolve all conflicts at once
  by deleting all conflicting tuples rather than dealing with them individually
  or skipping the transaction.

  In the future, this will also allow us to specify different resolution
  handlers for such a conflict type.

  Add the stats for this conflict type in pg_stat_subscription_stats.

  Author: Nisha Moond <nisha.moond412@gmail.com>
  Author: Zhijie Hou <houzj.fnst@fujitsu.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Peter Smith <smithpb2250@gmail.com>
  Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
  Discussion: https://postgr.es/m/CABdArM7FW-_dnthGkg2s0fy1HhUB8C3ELA0gZX1kkbs1ZZoV3Q@mail.gmail.com

* Add GUC option to control maximum active replication origins. (Masahiko Sawada, 2025-03-21)

  This commit introduces a new GUC option max_active_replication_origins to
  control the maximum number of active replication origins. Previously, this
  was controlled by 'max_replication_slots'. Having a separate GUC option
  provides better flexibility for setting up subscribers, as they may not
  require replication slots (for cascading replication) but always require
  replication origins.

  Author: Euler Taveira <euler@eulerto.com>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-by: vignesh C <vignesh21@gmail.com>
  Discussion: https://postgr.es/m/b81db436-8262-4575-b7c4-bc0c1551000b@app.fastmail.com

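A sketch of setting it on a subscriber, assuming that, like max_replication_slots before it, the new setting only takes effect at server start:

      ALTER SYSTEM SET max_active_replication_origins = 16;
      -- then restart the server
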
* doc: Remove incorrect description about dropping replication slots. (Fujii Masao, 2025-03-21)

  pg_drop_replication_slot() can drop replication slots created on a different
  database than the one where it is executed. This behavior has been in place
  since PostgreSQL 9.4, when pg_drop_replication_slot() was introduced.

  However, commit ff539d mistakenly added the following incorrect description in
  the documentation:

      For logical slots, this must be called when connected to the same database
      the slot was created on.

  This commit removes that incorrect statement. A similar mistake was also
  present in the documentation for the DROP_REPLICATION_SLOT command, which has
  now been corrected as well.

  Back-patch to all supported versions.

  Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
  Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
  Discussion: https://postgr.es/m/OSCPR01MB14966C6BE304B5BB2E58D4009F5DE2@OSCPR01MB14966.jpnprd01.prod.outlook.com
  Backpatch-through: 13

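That is, the following works from any database in the cluster, even for a logical slot created while connected elsewhere (the slot name is hypothetical):

      SELECT pg_drop_replication_slot('obsolete_slot');
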
* Add vacuum_truncate configuration parameter. (Nathan Bossart, 2025-03-20)

  This new parameter works just like the storage parameter of the same name: if
  set to true (which is the default), autovacuum and VACUUM attempt to truncate
  any empty pages at the end of the table. It is primarily intended to help
  users avoid locking issues on hot standbys. The setting can be overridden with
  the storage parameter or VACUUM's TRUNCATE option.

  Since there's presently no way to determine whether a Boolean storage
  parameter is explicitly set or has just picked up the default value, this
  commit also introduces an isset_offset member to relopt_parse_elt.

  Suggested-by: Will Storey <will@summercat.com>
  Author: Nathan Bossart <nathandbossart@gmail.com>
  Co-authored-by: Gurjeet Singh <gurjeet@singh.im>
  Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
  Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
  Reviewed-by: Robert Treat <rob@xzilla.net>
  Discussion: https://postgr.es/m/Z2DE4lDX4tHqNGZt%40dev.null

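A sketch of how the three levels described above might interact (the table name is hypothetical, and how the GUC change is applied is left open):

      ALTER SYSTEM SET vacuum_truncate = off;               -- new cluster-wide default
      ALTER TABLE measurements SET (vacuum_truncate = on);  -- storage parameter overrides the GUC
      VACUUM (TRUNCATE false) measurements;                 -- explicit option wins for this run
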