path: root/src
* Fix portability issue in TAP tests of psql for locales (Michael Paquier, 2022-06-08)

  Some locales use a comma as decimal separator (like Czech or French), and psql's
  001_basic.pl for \timing was not able to handle that properly. This fixes the matching
  regexes to be able to handle both comma and dot as possible decimal separators, as per a
  suggestion from Andrew Dunstan.

  psql tests were the only place with such a portability issue (check-world passed here
  with a forced LANG/LANGUAGE). These tests are new as of c0280bc, so there is no need for
  a backpatch.

  Reported-by: Pavel Stehule
  Discussion: https://postgr.es/m/CAFj8pRBz8iQmd2aOaCLvO-rJY6vZr-h6Q0qvV0J+yb78J7uiaA@mail.gmail.com

* Restructure pg_upgrade output directories for better idempotence (Michael Paquier, 2022-06-08)

  38bfae3 has moved the contents written to files by pg_upgrade under a new directory
  called pg_upgrade_output.d/ located in the new cluster's data folder, and it used a
  simple structure made of two fixed subdirectories: log/ and dump/. This design made
  pg_upgrade weaker on repeated calls, as we could get failures when creating one or more
  of those directories, while potentially losing the logs of a previous run (logs are
  retained automatically on failure, and cleaned up on success unless --retain is
  specified). So a user would need to clean up pg_upgrade_output.d/ as an extra step for
  any repeated calls of pg_upgrade. The most common scenario here is --check followed by
  the actual upgrade, but one could also see a failure when specifying an incorrect input
  argument value. Removing the logs entirely would have the disadvantage of removing all
  the past information, even if --retain was specified at some past step. This is annoying
  for a lot of users and automated upgrade flows.

  So, rather than requiring a manual removal of pg_upgrade_output.d/, this redesigns the
  set of output directories in a more dynamic way, based on a suggestion from Tom Lane and
  Daniel Gustafsson. pg_upgrade_output.d/ is still the base path, but a second directory
  level is added, mostly named after an ISO-8601-formatted timestamp (in short,
  human-readable, with milliseconds appended to the name to avoid any conflicts). The logs
  and dumps are saved within the same subdirectories as previously, log/ and dump/, but
  these are located inside the subdirectory named after the timestamp. The logs of a given
  run are removed only after a successful run if --retain is not used, and
  pg_upgrade_output.d/ is kept if there are any logs from a previous run. Note that
  previously, pg_upgrade would have kept the logs even after a successful --check, but that
  was inconsistent compared to the case without --check when using --retain. The code in
  charge of the removal of the output directories is now refactored into a single routine.

  Two TAP tests are added with some --check commands (one failure case and one success
  case) to cover the issue fixed here. Note that the tests had to be tweaked a bit to fit
  with the new directory structure so that they can find any logs generated on failure.
  This is still going to require a change in the buildfarm client for the case where
  pg_upgrade is tested without the TAP test, but I'll tackle that with a separate patch
  where needed.

  Reported-by: Tushar Ahuja
  Author: Michael Paquier
  Reviewed-by: Daniel Gustafsson, Justin Pryzby
  Discussion: https://postgr.es/m/77e6ecaa-2785-97aa-f229-4b6e047cbd2b@enterprisedb.com

* Harden Memoization code against broken data types (David Rowley, 2022-06-08)

  Bug #17512 highlighted that a suitably broken data type could cause the backend to crash
  if either the hash function or equality function were in some way non-deterministic based
  on their input values. Such a data type could cause a crash of the backend due to some
  code which assumes that we'll always find a hash table entry corresponding to an item in
  the Memoize LRU list.

  Here we remove the assumption that we'll always find the entry corresponding to the
  given LRU list item and add run-time checks to verify we have found the given item in the
  cache.

  This is not a fix for bug #17512, but it will turn the crash reported by that bug report
  into an internal ERROR.

  Reported-by: Ales Zeleny
  Reviewed-by: Tom Lane
  Discussion: https://postgr.es/m/CAApHDvpxFSTwvoYWT7kmFVSZ9zLAeHb=S9vrz=RExMgSkQNWqw@mail.gmail.com
  Backpatch-through: 14, where Memoize was added.

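  A minimal sketch (not from the commit) of the hardening pattern described above: verify a
  cache lookup at run time instead of assuming it succeeds, so a broken hash or equality
  function yields an error rather than a crash. All names here are hypothetical stand-ins,
  not the actual Memoize code.

      #include <stdio.h>
      #include <stdlib.h>

      typedef struct CacheEntry { int key; } CacheEntry;

      /* Hypothetical lookup that can miss if hashing/equality are broken. */
      static CacheEntry *
      cache_lookup(CacheEntry *entries, int nentries, int key)
      {
          for (int i = 0; i < nentries; i++)
              if (entries[i].key == key)
                  return &entries[i];
          return NULL;
      }

      int
      main(void)
      {
          CacheEntry  entries[] = {{1}, {2}, {3}};
          CacheEntry *hit = cache_lookup(entries, 3, 42);

          /* The hardening: report an error instead of dereferencing a missing entry. */
          if (hit == NULL)
          {
              fprintf(stderr, "ERROR: could not find memoization table entry\n");
              return EXIT_FAILURE;
          }
          printf("found key %d\n", hit->key);
          return EXIT_SUCCESS;
      }
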
* Fix off-by-one loop termination condition in pg_stat_get_subscription(). (Tom Lane, 2022-06-07)

  pg_stat_get_subscription scanned one more LogicalRepWorker array entry than is really
  allocated. In the worst case this could lead to SIGSEGV, if the LogicalRepCtx data
  structure is near the end of shared memory. That seems quite unlikely though (thanks to
  the ordering of calls in CreateSharedMemoryAndSemaphores) and we've heard no field
  reports of it. A more likely misbehavior is one row of garbage data in the function's
  result, but even that is not real likely because of the check that the pid field matches
  some live backend.

  Report and fix by Kuntal Ghosh. This bug is old, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAGz5QCJykEDzW6jQK6Yz7Qh_PMtD=95de_7QoocbVR2Qy8hWZA@mail.gmail.com

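  A minimal sketch (not from the commit) of the general shape of an off-by-one loop
  termination bug like the one described above: using <= instead of < reads one element
  past the end of an array sized for MAX_WORKERS entries.

      #include <stdio.h>

      #define MAX_WORKERS 4

      int
      main(void)
      {
          int pids[MAX_WORKERS] = {101, 102, 103, 104};

          /*
           * Buggy form: "i <= MAX_WORKERS" would touch pids[MAX_WORKERS], one slot past
           * the end of the array. The correct form stops strictly before the length.
           */
          for (int i = 0; i < MAX_WORKERS; i++)
              printf("worker %d has pid %d\n", i, pids[i]);

          return 0;
      }
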
* Don't fail on libpq-generated error reports in pg_amcheck. (Tom Lane, 2022-06-06)

  An error PGresult generated by libpq itself, such as a report of connection loss, won't
  have broken-down error fields. should_processing_continue() blithely assumed that
  PG_DIAG_SEVERITY_NONLOCALIZED would always be present, and would dump core if it wasn't.

  Per grepping to see if 6d157e7cb's mistake was repeated elsewhere.

* Don't fail on libpq-generated error reports in ecpg_raise_backend(). (Tom Lane, 2022-06-06)

  An error PGresult generated by libpq itself, such as a report of connection loss, won't
  have broken-down error fields. ecpg_raise_backend() blithely assumed that
  PG_DIAG_MESSAGE_PRIMARY would always be present, and would end up passing a NULL string
  pointer to snprintf when it isn't. That would typically crash before 3779ac62d, and it
  would fail to provide a useful error report in any case.

  Best practice is to substitute PQerrorMessage(conn) in such cases, so do that.

  Per bug #17421 from Masayuki Hirose. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/17421-790ff887e3188874@postgresql.org

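  A minimal sketch (not from the commit) of the fallback pattern recommended above for
  libpq-generated errors, which carry no broken-down fields: PQresultErrorField() returns
  NULL in that case, so the code falls back to PQerrorMessage(). The connection string is a
  placeholder.

      /* Compile with something like: cc example.c -I$(pg_config --includedir) -lpq */
      #include <stdio.h>
      #include <libpq-fe.h>

      int
      main(void)
      {
          PGconn     *conn = PQconnectdb("dbname=postgres");   /* placeholder conninfo */
          PGresult   *res;
          const char *msg;

          res = PQexec(conn, "SELECT 1/0");                    /* force an error */
          if (PQresultStatus(res) != PGRES_TUPLES_OK)
          {
              /* Server-reported errors have broken-down fields ... */
              msg = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
              /* ... but libpq-generated errors (e.g. connection loss) do not. */
              if (msg == NULL)
                  msg = PQerrorMessage(conn);
              fprintf(stderr, "error: %s\n", msg);
          }
          PQclear(res);
          PQfinish(conn);
          return 0;
      }
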
* Fix psql's single transaction mode on client-side errors with -c/-f switches (Michael Paquier, 2022-06-06)

  psql --single-transaction is able to handle multiple -c and -f switches in a single
  transaction since d5563d7d, but this had the surprising behavior of forcing a transaction
  COMMIT even if psql failed with an error in the client (for example an incorrect path
  given to \copy), which would generate an error, but still commit any changes that were
  already applied in the backend. This commit makes the behavior more consistent, by
  enforcing a transaction ROLLBACK if any commands fail, both client-side and backend-side,
  so that no changes are applied if an error happens in any of them.

  Some tests are added on HEAD to provide some coverage about all that. Backend-side errors
  are unreliable as IPC::Run can complain on SIGPIPE if psql quits before reading a query
  result, but that should work properly in the case where any errors come from psql itself,
  which is what the original report is about.

  Reported-by: Christoph Berg
  Author: Kyotaro Horiguchi, Michael Paquier
  Discussion: https://postgr.es/m/17504-76b68018e130415e@postgresql.org
  Backpatch-through: 10

* Automatically count the number of output lines in psql/help.c. (Tom Lane, 2022-06-04)

  The hard-wired PageOutput arguments in usage() and sibling functions have been a
  perennial maintenance gotcha, and there's no reason to think we'll ever get any better
  about that. Let's get rid of those magic constants by constructing the output in a buffer
  where we can count the newlines before calling PageOutput. (Perhaps this is
  microscopically slower; but none of these functions are performance critical, and anyway
  we might well be buying back all the cost by avoiding having to pass most of the data
  through snprintf.c. I could not detect any speed difference in a desultory check.) This
  also gets rid of the need to assume that platform-specific variations in the output are
  insignificant.

  While at it, make the code shorter and more abstract by inventing helper macros HELP0()
  and HELPN() to encapsulate the specific output actions being invoked.

  Discussion: https://postgr.es/m/365160.1654289490@sss.pgh.pa.us

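  A minimal sketch (not from the commit) of the counting idea described above: assemble the
  help text in a buffer first and count its newlines before handing it to a pager, instead
  of hard-wiring a line count. psql's PageOutput() and the HELP0()/HELPN() macros are not
  reproduced here; this only shows the counting step.

      #include <stdio.h>

      /* Count output lines in an already-built help buffer. */
      static int
      count_lines(const char *buf)
      {
          int nlines = 0;

          for (const char *p = buf; *p; p++)
              if (*p == '\n')
                  nlines++;
          return nlines;
      }

      int
      main(void)
      {
          char buf[256];

          snprintf(buf, sizeof(buf),
                   "General\n"
                   "  \\copyright    show PostgreSQL usage and distribution terms\n"
                   "  \\q            quit psql\n");

          /* A real pager call would receive this count instead of a magic constant. */
          printf("help text spans %d lines\n", count_lines(buf));
          fputs(buf, stdout);
          return 0;
      }
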
* Force run of pg_upgrade in the build directory in its TAP test (Michael Paquier, 2022-06-04)

  TAP tests are run from their own directory in the source tree, and in a VPATH build the
  execution of the pg_upgrade command was leaving behind a file in the source tree, which
  should be left untouched. In order to avoid this issue, the test moves to
  PostgreSQL::Test::Utils::tmp_check, so that any files generated by pg_upgrade do not
  impact the source tree, but the build tree. This has the nice side effect of making the
  presence of such files in pg_upgrade's .gitignore and Makefile unnecessary.

  This strategy is similar to psql's test 010_tab_completion.pl, though the reasons behind
  this choice are different. In passing, fix one misleading test name that was added by
  99f6f19.

  Per discussion with Peter Eisentraut, Andrew Dunstan, Tom Lane, Andres Freund and myself.

  Discussion: https://postgr.es/m/f80ace33-11fb-1cd3-20f8-98f51d151088@enterprisedb.com

* Improve psql \?'s description of large-object-related commands. (Tom Lane, 2022-06-03)

  Provide a gloss of which command does what, as all other backslash commands have. Put the
  large-object command section into a more considered spot in the list.

  In passing, update the output-lines count in helpVariables() (oversight in 7844c9918,
  looks like).

  Thibaud Walkowiak, reviewed by Nathan Bossart and myself

  Discussion: https://postgr.es/m/43f0439c-df3e-a045-ac99-af33523cc2d4@dalibo.com

* Prohibit combining publications with different column lists. (Amit Kapila, 2022-06-02)

  Currently, we simply combine the column lists when publishing tables on multiple
  publications and that can sometimes lead to unexpected behavior. Say, if a column is
  published in any row-filtered publication, then the values for that column are sent to
  the subscriber even for rows that don't match the row filter, as long as the row matches
  the row filter for any other publication, even if that other publication doesn't include
  the column.

  The main purpose of introducing a column list is to have statically different shapes on
  publisher and subscriber or hide sensitive column data. In both cases, it doesn't seem to
  make sense to combine column lists.

  So, we disallow the cases where the column list is different for the same table when
  combining publications. It can be later extended to combine the column lists for
  selective cases where required.

  Reported-by: Alvaro Herrera
  Author: Hou Zhijie
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql

* Add missing test names in TAP tests of pg_upgrade (Michael Paquier, 2022-06-02)

  While on it, this removes the inclusion of getcwd(), as the test code does not rely on
  it.

  Author: Peter Eisentraut
  Discussion: https://postgr.es/m/f80ace33-11fb-1cd3-20f8-98f51d151088@enterprisedb.com

* Silence compiler warnings from some older compilers. (Tom Lane, 2022-06-01)

  Since a117cebd6, some older gcc versions issue "variable may be used uninitialized in
  this function" complaints for brin_summarize_range. Silence that using the same coding
  pattern as in bt_index_check_internal; arguably, a117cebd6 had too narrow a view of which
  compilers might give trouble.

  Nathan Bossart and Tom Lane. Back-patch as the previous commit was.

  Discussion: https://postgr.es/m/20220601163537.GA2331988@nathanxps13

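  A minimal sketch (not from the commit) of the kind of coding pattern referred to above: a
  variable that is in fact always assigned before use still gets a dummy initial value so
  that older compilers cannot complain it "may be used uninitialized".

      #include <stdio.h>
      #include <stdlib.h>

      int
      main(int argc, char **argv)
      {
          int         nblocks = 0;    /* initialized only to keep old compilers quiet */
          int         have_range = (argc > 1);

          if (have_range)
              nblocks = atoi(argv[1]);

          if (have_range)             /* nblocks is only read when it was assigned */
              printf("summarizing %d blocks\n", nblocks);
          else
              printf("nothing to do\n");
          return 0;
      }
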
* Fix pl/perl test case so it will still work under Perl 5.36. (Tom Lane, 2022-06-01)

  Perl 5.36 has reclassified the warning condition that this test case used, so that the
  expected error fails to appear. Tweak the test so it instead exercises a case that's
  handled the same way in all Perl versions of interest.

  This appears to meet our standards for back-patching into out-of-support branches: it
  changes no user-visible behavior but enables testing of old branches with newer tools.
  Hence, back-patch as far as 9.2.

  Dagfinn Ilmari Mannsåker, per report from Jitka Plesníková.

  Discussion: https://postgr.es/m/564579.1654093326@sss.pgh.pa.us

* Revert changes to CONCURRENTLY that "sped up" Xmin advance (Alvaro Herrera, 2022-05-31)

  This reverts commit d9d076222f5b "VACUUM: ignore indexing operations with CONCURRENTLY".

  These changes caused indexes created with the CONCURRENTLY option to miss heap tuples
  that were HOT-updated and HOT-pruned during the index creation. Before these changes, HOT
  pruning would have been prevented by the Xmin of the transaction creating the index, but
  because this change was precisely to allow the Xmin to move forward ignoring that
  backend, now other backends scanning the table can prune them. This is not a problem for
  VACUUM (which requires a lock that conflicts with a CREATE INDEX CONCURRENTLY operation),
  but HOT-prune can definitely occur. In other words, Xmin advancement was sped up, but at
  the cost of corrupting the resulting index.

  Regrettably, this means that the new feature in PG14 that RIC/CIC on very large tables no
  longer force VACUUM to retain very old tuples goes away. We might try to implement it
  again in a later release, but for now the risk of indexes missing tuples is too high and
  there's no easy fix.

  Backpatch to 14, where this change appeared.

  Reported-by: Peter Slavov <pet.slavov@gmail.com>
  Diagnosed-by: Andrey Borodin <x4mmm@yandex-team.ru>
  Diagnosed-by: Michael Paquier <michael@paquier.xyz>
  Diagnosed-by: Andres Freund <andres@anarazel.de>
  Discussion: https://postgr.es/m/17485-396609c6925b982d%40postgresql.org

* Ensure ParseTzFile() closes the input file after failing. (Tom Lane, 2022-05-31)

  We hadn't noticed this because (a) few people feed invalid timezone abbreviation files to
  the server, and (b) in typical scenarios guc.c would throw ereport(ERROR) and then
  transaction abort handling would silently clean up the leaked file reference. However, it
  was possible to observe file leakage warnings if one breaks an already-active
  abbreviation file, because guc.c does not throw ERROR when loading supposedly-validated
  settings during session start or SIGHUP processing.

  Report and fix by Kyotaro Horiguchi (cosmetic adjustments by me)

  Discussion: https://postgr.es/m/20220530.173740.748502979257582392.horikyota.ntt@gmail.com

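  A minimal sketch (not from the commit) of the general fix shape described above: close
  the input file on the error exit path instead of leaking the open handle. PostgreSQL's
  timezone-abbreviation parser uses AllocateFile()/FreeFile() rather than plain stdio, so
  this only illustrates the idea, and the file name and "syntax error" rule are
  hypothetical.

      #include <stdio.h>
      #include <stdbool.h>

      /* Parse a (hypothetical) abbreviation file; return false on any error. */
      static bool
      parse_tz_file(const char *path)
      {
          FILE *fp = fopen(path, "r");
          char  line[256];

          if (fp == NULL)
              return false;

          while (fgets(line, sizeof(line), fp))
          {
              if (line[0] == '?')         /* stand-in for a syntax error */
              {
                  fclose(fp);             /* the fix: close before bailing out */
                  return false;
              }
          }
          fclose(fp);
          return true;
      }

      int
      main(int argc, char **argv)
      {
          return (argc > 1 && parse_tz_file(argv[1])) ? 0 : 1;
      }
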
* shm_mq_sendv: Fix flushing bug when receiver not yet attached. (Robert Haas, 2022-05-31)

  With the old logic, when the receiver had not yet attached, we would never call
  shm_mq_inc_bytes_written(), even if force_flush = true was specified. That could result
  in a situation where data that the sender believes it has sent is never received.

  Along the way, remove a useless function prototype for a nonexistent function from
  shm_mq.h.

  Commit 46846433a03dff4f2e08c8a161e54a842da360d6 introduced these problems.

  Pavan Deolasee, with a few changes by me.

  Discussion: https://postgr.es/m/CABOikdPkwtLLCTnzzmpSMXo3QZa2yXq0J7Q61ssdLFAJYrOVvQ@mail.gmail.com

* Fix typo in hash README. (Amit Kapila, 2022-05-31)

  Author: Peter Smith
  Discussion: https://postgr.es/m/CAHut+Pu-V22PiJF2ym9_NVZe-+qnycfyEX24dZm=7URWhDHJ3w@mail.gmail.com

* Remove useless tests for TRUNCATE on foreign tables (Michael Paquier, 2022-05-31)

  foreign_data has kept around a set of tests for TRUNCATE to look after the case of
  foreign tables, with[out] inheritance and with[out] partitions, assuming that the command
  is not supported for this relkind. However, TRUNCATE is supported on foreign tables if
  the FDW involved is able to handle the command, like postgres_fdw.

  Note that postgres_fdw includes tests to cover all the cases removed by this commit
  (which had misleading comments), so these did not provide any additional coverage anyway.

  Author: Yugo Nagata
  Discussion: https://postgr.es/m/20220527172543.0a2fdb469cf048b81c0967d3@sraoss.co.jp

* Add debugging help in OwnLatch(). (Thomas Munro, 2022-05-31)

  Build farm animal gharial recently failed a few times in a parallel worker's call to
  OwnLatch() with "ERROR: latch already owned". Let's turn that into a PANIC and show the
  PID of the owner, to try to learn more.

  Discussion: https://postgr.es/m/CA%2BhUKGJ_0RGcr7oUNzcHdn7zHqHSB_wLSd3JyS2YC_DYB%2B-V%3Dg%40mail.gmail.com

* Make STRING an unreserved_keyword. (Tom Lane, 2022-05-30)

  Commit 1a36bc9db (SQL/JSON query functions) introduced STRING as a
  type_func_name_keyword, thereby breaking applications that use "string" as a table name,
  column name, function parameter name, etc. That seems like a pretty bad thing, not least
  because the SQL spec says that STRING is an unreserved keyword.

  This is easy enough to fix so far as the core grammar is concerned. However, doing so
  causes some ECPG test cases to fail, specifically those that use "string" as a typedef
  name. It turns out this is because portions of the ECPG grammar allow
  type_func_name_keywords but not unreserved_keywords as typedef names. That's pretty
  horrid, and it's mildly astonishing that we've not heard complaints about it before. We
  can fix two of those uses trivially, but the ones in the var_type production are less
  easy. As a stopgap, hard-code STRING as an allowed alternative in var_type.

  Per report from Alastair McKinley.

  Discussion: https://postgr.es/m/3661437.1653855582@sss.pgh.pa.us

* logging: Also add the command prefix to detail and hint messages (Peter Eisentraut, 2022-05-30)

  This makes the output line up better and allows filtering messages by command.

  Discussion: https://www.postgresql.org/message-id/ba6d4fac-9e33-91f9-94fb-1e4c144a48b9@enterprisedb.com

* Fix COPY FROM when database encoding is SQL_ASCII. (Heikki Linnakangas, 2022-05-29)

  In the codepath when no encoding conversion is required, the check for an incomplete
  character at the end of input incorrectly used the server encoding's max character
  length, instead of the client's. Usually the server and client encodings are the same
  when we're not performing encoding conversion, but SQL_ASCII is an exception.

  In passing, also fix some outdated comments that still talked about the old COPY
  protocol. It was removed in v14.

  Per bug #17501 from Vitaly Voronov. Backpatch to v14 where this was introduced.

  Discussion: https://www.postgresql.org/message-id/17501-128b1dd039362ae6@postgresql.org

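  A minimal sketch (not from the commit) of why the client encoding's maximum character
  length, not the server's, matters for the end-of-buffer check described above: with a
  single-byte server encoding such as SQL_ASCII, using the server's value (1) never leaves
  room to detect a split multibyte client character. The check function here is a
  hypothetical simplification; PostgreSQL exposes the per-encoding maximum via
  pg_encoding_max_length() internally.

      #include <stdio.h>
      #include <stdbool.h>

      /*
       * Could the raw_len bytes remaining in the input end with an incomplete
       * multibyte sequence in the *client* encoding?
       */
      static bool
      maybe_incomplete_char(int raw_len, int client_max_char_len)
      {
          return raw_len > 0 && raw_len < client_max_char_len;
      }

      int
      main(void)
      {
          int server_max = 1;     /* e.g. SQL_ASCII */
          int client_max = 3;     /* e.g. EUC_JP */

          /* Two trailing bytes of a three-byte client character: */
          printf("using client max: %s\n",
                 maybe_incomplete_char(2, client_max) ? "flagged" : "missed");
          printf("using server max: %s\n",
                 maybe_incomplete_char(2, server_max) ? "flagged" : "missed");
          return 0;
      }
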
* Align stats_fetch_consistency definition with guc.c default. (Andres Freund, 2022-05-28)

  Somewhat embarrassing oversight in 98f897339b0. Does not have a functional impact, but is
  unnecessarily confusing.

  Reported-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/Yo2351qVYqd/bJws@paquier.xyz

* Revert "Add single-item cache when looking at topmost XID of a subtrans XID"Michael Paquier2022-05-28
| | | | | | | | | | | | | | | | | | This reverts commit 06f5295 as per issues with this approach, both in terms of efficiency impact and stability. First, contrary to the single-item cache for transaction IDs in transam.c, the cache may finish by not be hit for a long time, and without an invalidation mechanism to clear it, it would cause inconsistent results on wraparound for example. Second, the use of SubTransGetTopmostTransaction() for the caching has a limited impact on performance. SubTransGetParent() could have more impact, though the benchmarking of the single-item approach still needs to be proved, particularly under the conditions where SLRU lookups are stressed in parallel with overflowed snapshots (aka more than 64 subxids generated, for example). After discussion with Andres Freund. Discussion: https://postgr.es/m/20220524235250.gtt3uu5zktfkr4hv@alap3.anarazel.de
* Handle NULL for short descriptions of custom GUC variables (Michael Paquier, 2022-05-28)

  If a short description is specified as NULL in one of the various
  DefineCustomXXXVariable() functions available to external modules to define a custom
  parameter, SHOW ALL would crash. This change teaches SHOW ALL to properly handle NULL
  short descriptions, as well as any code paths that manipulate it, to gain in flexibility.
  Note that help_config.c was already able to do that, when describing a set of GUCs for
  postgres --describe-config.

  Author: Steve Chavez
  Reviewed-by: Nathan Bossart, Andres Freund, Michael Paquier, Tom Lane
  Discussion: https://postgr.es/m/CAGRrpzY6hO-Kmykna_XvsTv8P2DshGiU6G3j8yGao4mk0CqjHA%40mail.gmail.com
  Backpatch-through: 10

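  A minimal sketch (not from the commit) of the null-guarding idea described above: code
  that formats a parameter's short description has to tolerate a NULL pointer rather than
  hand it to string functions. The struct and output are simplified stand-ins for the real
  GUC machinery, and the parameter names are hypothetical.

      #include <stdio.h>

      typedef struct GucEntry
      {
          const char *name;
          const char *short_desc;     /* may legitimately be NULL */
      } GucEntry;

      int
      main(void)
      {
          GucEntry entries[] = {
              {"my_ext.level", "Sets the verbosity of my_ext."},
              {"my_ext.magic", NULL},             /* defined with a NULL description */
          };

          for (int i = 0; i < 2; i++)
          {
              const char *desc = entries[i].short_desc;

              /* The guard: never pass NULL to printf("%s") or strlen(). */
              printf("%-15s %s\n", entries[i].name, desc ? desc : "");
          }
          return 0;
      }
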
* Teach remove_unused_subquery_outputs about window run conditions (David Rowley, 2022-05-27)

  9d9c02ccd added code to allow the executor to take shortcuts when quals on monotonic
  window functions guaranteed that once the qual became false it could never become true
  again. When possible, baserestrictinfo quals are converted to become these quals, which
  we call run conditions.

  Unfortunately, in 9d9c02ccd, I forgot to update remove_unused_subquery_outputs to teach
  it about these run conditions. This could cause a WindowFunc column which was unused in
  the target list but referenced by an upper-level WHERE clause to be removed from the
  subquery when the qual in the WHERE clause was converted into a window run condition.
  Because of this, the entire WindowClause would be removed from the query resulting in
  additional rows making it into the resultset when they should have been filtered out by
  the WHERE clause.

  Here we fix this by recording which target list items in the subquery have run
  conditions. That gets passed along to remove_unused_subquery_outputs to tell it not to
  remove these items from the target list.

  Bug: #17495
  Reported-by: Jeremy Evans
  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/17495-7ffe2fa0b261b9fa@postgresql.org

* Remove misguided SSL key file ownership check in libpq. (Tom Lane, 2022-05-26)

  Commits a59c79564 et al. tried to sync libpq's SSL key file permissions checks with what
  we've used for years in the backend. We did not intend to create any new failure cases,
  but it turns out we did: restricting the key file's ownership breaks cases where the
  client is allowed to read a key file despite not having the identical UID. In particular
  a client running as root used to be able to read someone else's key file; and having seen
  that I suspect that there are other, less-dubious use cases that this restriction breaks
  on some platforms.

  We don't really need an ownership check, since if we can read the key file despite its
  having restricted permissions, it must have the right ownership --- under normal
  conditions anyway, and the point of this patch is that any additional corner cases where
  that works should be deemed allowable, as they have been historically. Hence, just drop
  the ownership check, and rearrange the permissions check to get rid of its faulty
  assumption that geteuid() can't be zero. (Note that the comparable backend-side code
  doesn't have to cater for geteuid() == 0, since the server rejects that very early on.)

  This does have the end result that the permissions safety check used for a root user's
  private key file is weaker than that used for anyone else's. While odd, root really ought
  to know what she's doing with file permissions, so I think this is acceptable.

  Per report from Yogendra Suralkar. Like the previous patch, back-patch to all supported
  branches.

  Discussion: https://postgr.es/m/MW3PR15MB3931DF96896DC36D21AFD47CA3D39@MW3PR15MB3931.namprd15.prod.outlook.com

* Avoid ERRCODE_INTERNAL_ERROR in oracle_compat.c functions. (Tom Lane, 2022-05-26)

  repeat() checked for integer overflow during its calculation of the required output
  space, but it just passed the resulting integer to palloc(). This meant that result sizes
  between 1GB and 2GB led to ERRCODE_INTERNAL_ERROR, "invalid memory alloc request size"
  rather than ERRCODE_PROGRAM_LIMIT_EXCEEDED, "requested length too large". That seems like
  a bit of a wart, so add an explicit AllocSizeIsValid check to make these error cases
  uniform. Do likewise in the sibling functions lpad() etc.

  While we're here, also modernize their overflow checks to use pg_mul_s32_overflow() etc
  instead of expensive divisions.

  Per complaint from Japin Li. This is basically cosmetic, so I don't feel a need to
  back-patch.

  Discussion: https://postgr.es/m/ME3P282MB16676ED32167189CB0462173B6D69@ME3P282MB1667.AUSP282.PROD.OUTLOOK.COM

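  A minimal sketch (not from the commit) of the style of overflow check mentioned above:
  detect multiplication overflow without a division, then apply an explicit size cap so an
  oversized request produces a "too large" error rather than an internal one. PostgreSQL's
  pg_mul_s32_overflow() wraps the GCC/Clang builtin used here where it is available; the
  1 GB cap below is hard-coded for the example and mirrors palloc's allocation limit.

      #include <stdio.h>
      #include <stdint.h>

      #define MAX_ALLOC_SIZE ((int32_t) 0x3fffffff)   /* ~1 GB, like palloc's cap */

      int
      main(void)
      {
          int32_t nchars = 500000000;     /* repeat count */
          int32_t charlen = 4;            /* bytes per character */
          int32_t needed;

          /* Multiplication with overflow detection, no division required. */
          if (__builtin_mul_overflow(nchars, charlen, &needed) ||
              needed > MAX_ALLOC_SIZE)
          {
              fprintf(stderr, "ERROR: requested length too large\n");
              return 1;
          }
          printf("allocating %d bytes\n", needed);
          return 0;
      }
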
* Add tab completion for table_rewrite's CREATE EVENT TRIGGER in psql (Michael Paquier, 2022-05-25)

  Author: Hou Zhijie
  Discussion: https://postgr.es/m/OS0PR01MB5716DEFF787B925C4778228C94D69@OS0PR01MB5716.jpnprd01.prod.outlook.com

* Fix stats_fetch_consistency default value indicated in postgresql.conf.sample. (Andres Freund, 2022-05-24)

  Mistake in 5891c7a8ed8, likely made when switching the default value from none to fetch
  during development.

  Reported-by: Nathan Bossart <nathandbossart@gmail.com>
  Author: Nathan Bossart <nathandbossart@gmail.com>
  Discussion: https://postgr.es/m/20220524220147.GA1298892@nathanxps13

* Remove duplicated words in comments of pgstat.c and pgstat_internal.h (Michael Paquier, 2022-05-24)

  Author: Atsushi Torikoshi
  Reviewed-by: Nathan Bossart
  Discussion: https://postgr.es/m/d00ddbf29f9d09b3a471e64977560de1@oss.nttdata.com

* pg_upgrade: Tweak translatable strings (Peter Eisentraut, 2022-05-23)

  "\r" (for progress output) must not be inside a translatable string (gettext gets upset).

  In passing, move the minimum supported version number to a separate argument, so that we
  don't have to retranslate this string every year now.

* psql: Update \timing also in case of an error (Peter Eisentraut, 2022-05-23)

  The changes to show all query results (7844c9918) broke \timing output in case of an
  error; it didn't update the timing result and showed 0.000 ms. Fix by updating the timing
  result also in the error case. Also, for robustness, update the timing result any time a
  result is obtained, not only for the last, so a sensible value is always available.

  Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
  Author: Richard Guo <guofenglinux@gmail.com>
  Author: Fabien COELHO <coelho@cri.ensmp.fr>
  Discussion: https://www.postgresql.org/message-id/3813350.1652111765%40sss.pgh.pa.us

* Remove debug messages from tuplesort_sort_memtuples() (John Naylor, 2022-05-23)

  These were of value only during development.

  Reported by Justin Pryzby

  Discussion: https://www.postgresql.org/message-id/20220519201254.GU19626%40telsasoft.com

* pgstat: fix stats.spec instability on slow machines. (Andres Freund, 2022-05-22)

  On slow machines the modified test could end up switching the order in which
  transactional stats are reported in one session and non-transactional stats in another
  session. As stats handling of truncate is implemented as setting live/dead rows to 0, the
  order in which a truncate's stats changes are applied, relative to normal stats updates,
  matters.

  The handling of stats for truncate hasn't changed due to shared memory stats; this is
  longstanding behavior. We might want to improve truncate's stats handling in the future,
  but for now just change the order of forced flushes to make the test stable.

  Reported-by: Christoph Berg <myon@debian.org>
  Discussion: https://postgr.es/m/YoZf7U/WmfmFYFEx@msg.df7cb.de

* Show 'AS "?column?"' explicitly when it's important. (Tom Lane, 2022-05-21)

  ruleutils.c was coded to suppress the AS label for a SELECT output expression if the
  column name is "?column?", which is the parser's fallback if it can't think of something
  better. This is fine, and avoids ugly clutter, so long as (1) nothing further up in the
  parse tree relies on that column name or (2) the same fallback would be assigned when the
  rule or view definition is reloaded. Unfortunately (2) is far from certain, both because
  ruleutils.c might print the expression in a different form from how it was originally
  written and because FigureColname's rules might change in future releases. So we
  shouldn't rely on that.

  Detecting exactly whether there is any outer-level use of a SELECT column name would be
  rather expensive. This patch takes the simpler approach of just passing down a flag
  indicating whether there *could* be any outer use; for example, the output column names
  of a SubLink are not referenceable, and we also do not care about the names exposed by
  the right-hand side of a setop. This is sufficient to suppress unwanted clutter in all
  but one case in the regression tests. That seems like reasonable evidence that it won't
  be too much in users' faces, while still fixing the cases we need to fix.

  Per bug #17486 from Nicolas Lutic. This issue is ancient, so back-patch to all supported
  branches.

  Discussion: https://postgr.es/m/17486-1ad6fd786728b8af@postgresql.org

* Remove unused-and-misspelled function extern declaration. (Tom Lane, 2022-05-21)

  Commit c65507763 added "extern XLogRecPtr CalculateMaxmumSafeLSN(void)", which bears no
  trace of connection to anything else in that patch or anywhere else. Remove it again.

  Sergei Kornilov (also spotted by Bharath Rupireddy)

  Discussion: https://postgr.es/m/706501646056870@vla3-6a5326aeb4ee.qloud-c.yandex.net
  Discussion: https://postgr.es/m/CALj2ACVoQ7NEf43Xz0rfxsGOKYTN5r4VZp2DO2_5p+CMzsRPFw@mail.gmail.com

* Avoid overflow hazard when clamping group counts to "long int". (Tom Lane, 2022-05-21)

  Several places in the planner tried to clamp a double value to fit in a "long" by doing

      (long) Min(x, (double) LONG_MAX);

  This is subtly incorrect, because it casts LONG_MAX to double and potentially back again.
  If long is 64 bits then the double value is inexact, and the platform might round it up
  to LONG_MAX+1 resulting in an overflow and an undesirably negative output.

  While it's not hard to rewrite the expression into a safe form, let's put it into a
  common function to reduce the risk of someone doing it wrong in future.

  In principle this is a bug fix, but since the problem could only manifest with group
  count estimates exceeding 2^63, it seems unlikely that anyone has actually hit this or
  will do so anytime soon. We're fixing it mainly to satisfy fuzzer-type tools. That being
  the case, a HEAD-only fix seems sufficient.

  Andrey Lepikhov

  Discussion: https://postgr.es/m/ebbc2efb-7ef9-bf2f-1ada-d6ec48f70e58@postgrespro.ru

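  A minimal sketch (not from the commit) of the hazard described above and one safe way to
  clamp: since casting LONG_MAX to double can round up to 2^63, compare first and return
  LONG_MAX directly instead of casting the out-of-range value back. The helper name is
  hypothetical, not the function the commit actually added.

      #include <stdio.h>
      #include <limits.h>

      /* Hypothetical safe clamp: never cast an out-of-range double back to long. */
      static long
      clamp_double_to_long(double x)
      {
          if (x >= (double) LONG_MAX)     /* (double) LONG_MAX may round up to 2^63 */
              return LONG_MAX;
          if (x <= (double) LONG_MIN)
              return LONG_MIN;
          return (long) x;                /* here x is strictly inside the long range */
      }

      int
      main(void)
      {
          double huge = 1e30;

          /*
           * The unsafe form, (long) Min(huge, (double) LONG_MAX), can overflow and
           * yield a negative number when long is 64 bits wide.
           */
          printf("clamped: %ld\n", clamp_double_to_long(huge));
          return 0;
      }
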
* Improve and fix some issues in the TAP tests of pg_upgrade (Michael Paquier, 2022-05-21)

  This is based on a set of suggestions from Noah, with the following changes made:
  - The set of databases created in the tests is now prefixed with "regression" to not
    trigger any warnings with name restrictions when compiling the code with
    -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, and now only the first name checks the
    Windows case of double quotes mixed with backslashes.
  - Fix an issue with EXTRA_REGRESS_OPTS, which were not processed in a way consistent
    with 027_stream_regress.pl (missing space between the option string and pg_regress).
    This got introduced in 7dd3ee5.
  - Add a check on the exit code of the pg_regress command, to catch failures after
    running the regression tests.

  Reviewed-by: Noah Misch
  Discussion: https://postgr.es/m/YoHhWD5vQzb2mmiF@paquier.xyz

* Remove portability hazard in unsafe_tests/sql/guc_privs.sql. (Tom Lane, 2022-05-20)

  This new-in-v15 test case assumed it could set max_stack_depth as high as 2MB. You might
  think that'd be true on any modern platform but you'd be wrong, as I found out while
  experimenting with NetBSD/hppa. This test is about privileges not platform capabilities,
  so there seems no need to use any value greater than the 100kB setting already used in a
  couple of places in the core regression tests. There's certainly no call to expect people
  to raise their platform's default ulimit just to run this test.

* Fix DDL deparse of CREATE OPERATOR CLASS (Alvaro Herrera, 2022-05-20)

  When an implicit operator family is created, it wasn't getting reported. Make it do so.

  This has always been missing. Backpatch to 10.

  Author: Masahiko Sawada <sawada.mshk@gmail.com>
  Reported-by: Leslie LEMAIRE <leslie.lemaire@developpement-durable.gouv.fr>
  Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://postgr.es/m/f74d69e151b22171e8829551b1159e77@developpement-durable.gouv.fr

* Add pg_version() to PostgreSQL::Test::Cluster (Michael Paquier, 2022-05-20)

  _pg_version (version number based on PostgreSQL::Version) is a field private to
  Cluster.pm, but there was no helper routine to retrieve it from a Cluster's node. The
  same is done for install_path, for example, and the version object becomes handy when
  writing tests that need version-specific handling.

  Reviewed-by: Andrew Dunstan, Daniel Gustafsson
  Discussion: https://postgr.es/m/YoWfoJTc987tsxpV@paquier.xyz

* pg_waldump: Improve option parsing error messages (Peter Eisentraut, 2022-05-20)

  I rephrased the error messages to be more in the style of option_parse_int(), and also
  made use of the new "detail" message facility.

  I didn't actually use option_parse_int() (which could be used for -n) because
  libpgfeutils wasn't used here yet and I wanted to keep this just to string changes. But
  it could be done in the future.

* Repurpose PROC_COPYABLE_FLAGS as PROC_XMIN_FLAGS (Alvaro Herrera, 2022-05-19)

  This is a slight, convenient semantics change from what commit 0f0cfb494004 ("Fix
  parallel operations that prevent oldest xmin from advancing") introduced that lets us
  simplify the coding in the one place where it is used.

  Backpatch to 13. This is related to commit 6fea65508a1a ("Tighten ComputeXidHorizons'
  handling of walsenders") rewriting the code site where this is used, which has not yet
  been backpatched, but it may well be in the future.

  Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
  Discussion: https://postgr.es/m/202204191637.eldwa2exvguw@alvherre.pgsql

* Fix incorrect comments for Memoize struct (David Rowley, 2022-05-19)

  Reported-by: Peter Eisentraut
  Discussion: https://postgr.es/m/0635f5aa-4973-8dc2-4e4e-df9fd5778a65@enterprisedb.com
  Backpatch-through: 14, where Memoize was added

* Extend pg_publication_tables to display column list and row filter. (Amit Kapila, 2022-05-19)

  Commits 923def9a53 and 52e4f0cd47 allowed specifying column lists and row filters for
  publication tables. This commit extends the pg_publication_tables view and the
  pg_get_publication_tables function to display that information.

  This information will be useful to users, and we also need this for a later commit that
  prohibits combining multiple publications with different column lists for the same table.

  Author: Hou Zhijie
  Reviewed-by: Amit Kapila, Alvaro Herrera, Shi Yu, Takamichi Osumi
  Discussion: https://postgr.es/m/202204251548.mudq7jbqnh7r@alvherre.pgsql

* Update xml_1.out and xml_2.out (Alvaro Herrera, 2022-05-18)

  Commit 0fbf01120023 should have updated them but didn't.

* Fix EXPLAIN MERGE output when no tuples are processed (Alvaro Herrera, 2022-05-18)

  An 'else' clause was misplaced in commit 598ac10be1c2, making zero-rows output look a bit
  silly. Add a test case for it.

  Pointed out by Tom Lane.

  Discussion: https://postgr.es/m/21030.1652893083@sss.pgh.pa.us

* Check column list length in XMLTABLE/JSON_TABLE alias (Alvaro Herrera, 2022-05-18)

  We weren't checking the length of the column list in the alias clause of an XMLTABLE or
  JSON_TABLE function (a "tablefunc" RTE), and it was possible to make the server crash by
  passing an overly long one. Fix it by throwing an error in that case, like the other
  places that deal with alias lists.

  In passing, modify the equivalent test used for join RTEs to look like the other ones,
  which was different for no apparent reason.

  This bug came in when XMLTABLE was born in version 10; backpatch to all stable versions.

  Reported-by: Wang Ke <krking@zju.edu.cn>
  Discussion: https://postgr.es/m/17480-1c9d73565bb28e90@postgresql.org