path: root/src
* Fix CREATE INDEX CONCURRENTLY for simultaneous prepared transactions.  (Noah Misch, 2021-01-30)

  In a cluster having used CREATE INDEX CONCURRENTLY while having enabled
  prepared transactions, queries that use the resulting index can silently
  fail to find rows.  Fix this for future CREATE INDEX CONCURRENTLY by
  making it wait for prepared transactions like it waits for ordinary
  transactions.  This expands the VirtualTransactionId structure domain to
  admit prepared transactions.  It may be necessary to reindex to recover
  from past occurrences.

  Back-patch to 9.5 (all supported versions).

  Andrey Borodin, reviewed (in earlier versions) by Tom Lane and Michael
  Paquier.

  Discussion: https://postgr.es/m/2E712143-97F7-4890-B470-4A35142ABC82@yandex-team.ru
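  A simplified SQL sketch of the hazard this fixes (table and index names
  are illustrative; requires max_prepared_transactions > 0):

      -- session 1
      BEGIN;
      INSERT INTO t VALUES (1);
      PREPARE TRANSACTION 'px';

      -- session 2: previously, CREATE INDEX CONCURRENTLY did not wait
      -- for 'px', so the new index could silently miss its row
      CREATE INDEX CONCURRENTLY t_idx ON t (a);

      -- session 1
      COMMIT PREPARED 'px';

      -- recovering from a past occurrence
      REINDEX INDEX t_idx;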
* Fix typo  (Peter Eisentraut, 2021-01-29)
* Adjust comments of CheckRelationTableSpaceMove() and SetRelationTableSpace()  (Michael Paquier, 2021-01-29)

  Commit 4c9c359, which introduced those two functions, was overoptimistic
  on the point that only ShareUpdateExclusiveLock would be required when
  moving a relation to a new tablespace.  AccessExclusiveLock is a
  requirement, but ShareUpdateExclusiveLock may be used under specific
  conditions, like REINDEX CONCURRENTLY, where waits on past transactions
  make the operation safe even with a lower-level lock.  The current code
  does only the former, so update the existing comments to reflect that.
  Once a REINDEX (TABLESPACE) is introduced, those comments will require an
  extra refresh to mention their new use case.

  While on it, fix an incorrect variable name.

  Per discussion with Álvaro Herrera.

  Discussion: https://postgr.es/m/20210127140741.GA14174@alvherre.pgsql
* Retire pg_standby.  (Thomas Munro, 2021-01-29)

  pg_standby was useful more than a decade ago, but now it is obsolete.
  Its retirement has been proposed many times.  Now seems like a good time
  to finally do it, because "waiting restore commands" are incompatible
  with a proposed recovery prefetching feature.

  Discussion: https://postgr.es/m/20201029024412.GP5380%40telsasoft.com
  Author: Justin Pryzby <pryzby@telsasoft.com>
  Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
* Silence another gcc 11 warning.  (Tom Lane, 2021-01-28)

  Per buildfarm and local experimentation, bleeding-edge gcc isn't
  convinced that the MemSet in reorder_function_arguments() is safe.
  Shut it up by adding an explicit check that pronargs isn't negative,
  and by changing MemSet to memset.  (It appears that either change is
  enough to quiet the warning at -O2, but let's do both to be sure.)
* Remove bogus restriction from BEFORE UPDATE triggers  (Alvaro Herrera, 2021-01-28)

  In trying to protect the user from inconsistent behavior, commit
  487e9861d0cf "Enable BEFORE row-level triggers for partitioned tables"
  tried to prevent BEFORE UPDATE FOR EACH ROW triggers from moving the row
  from one partition to another.  However, it turns out that the
  restriction is wrong in two ways: first, it fails spuriously, preventing
  valid situations from working, as in bug #16794; and second, it doesn't
  protect from any misbehavior, because tuple routing would cope anyway.

  Fix by removing that restriction.

  We keep the same restriction on BEFORE INSERT FOR EACH ROW triggers,
  though.  It is valid and useful there.  In the future we could remove it
  by having tuple reroute work for inserts as it does for updates.

  Backpatch to 13.

  Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
  Reported-by: Phillip Menke <pg@pmenke.de>
  Discussion: https://postgr.es/m/16794-350a655580fbb9ae@postgresql.org
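  A minimal SQL sketch of a now-allowed case (all names illustrative):
  a BEFORE UPDATE row trigger changes the partition key, and tuple
  routing moves the row to the matching partition:

      CREATE TABLE parted (a int) PARTITION BY LIST (a);
      CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);
      CREATE TABLE part2 PARTITION OF parted FOR VALUES IN (2);

      CREATE FUNCTION move_to_2() RETURNS trigger LANGUAGE plpgsql AS
      $$ BEGIN NEW.a := 2; RETURN NEW; END $$;
      CREATE TRIGGER mv BEFORE UPDATE ON parted
        FOR EACH ROW EXECUTE FUNCTION move_to_2();

      INSERT INTO parted VALUES (1);
      UPDATE parted SET a = a;  -- row is rerouted from part1 to part2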
* Fix hash partition pruning with asymmetric partition sets.  (Tom Lane, 2021-01-28)

  perform_pruning_combine_step() was not taught about the number of
  partition indexes used in hash partitioning; more embarrassingly,
  get_matching_hash_bounds() also had it wrong.  These errors are masked
  in the common case where all the partitions have the same modulus and no
  partition is missing.  However, with missing or unequal-size partitions,
  we could erroneously prune some partitions that need to be scanned,
  leading to silently wrong query answers.

  While a minimal-footprint fix for this could be to export
  get_partition_bound_num_indexes and make the incorrect functions use it,
  I'm of the opinion that that function should never have existed in the
  first place.  It's not reasonable data structure design that
  PartitionBoundInfoData lacks any explicit record of the length of its
  indexes[] array.  Perhaps that was all right when it could always be
  assumed equal to ndatums, but something should have been done about it
  as soon as that stopped being true.  Putting in an explicit "nindexes"
  field makes both partition_bounds_equal() and partition_bounds_copy()
  simpler, safer, and faster than before, and removes explicit knowledge
  of the number-of-partition-indexes rules from some other places too.

  This change also makes get_hash_partition_greatest_modulus obsolete.
  I left that in place in case any external code uses it, but no core code
  does anymore.

  Per bug #16840 from Michał Albrycht.  Back-patch to v11 where the hash
  partitioning code came in.  (In the back branches, add the new field at
  the end of PartitionBoundInfoData to minimize ABI risks.)

  Discussion: https://postgr.es/m/16840-571a22976f829ad4@postgresql.org
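  A sketch of an "asymmetric" hash partition set of the kind that could be
  mis-pruned (names illustrative; mixing moduli of 2 and 4 yields
  unequal-size partitions):

      CREATE TABLE hp (a int) PARTITION BY HASH (a);
      CREATE TABLE hp_m2r0 PARTITION OF hp FOR VALUES WITH (MODULUS 2, REMAINDER 0);
      CREATE TABLE hp_m4r1 PARTITION OF hp FOR VALUES WITH (MODULUS 4, REMAINDER 1);
      CREATE TABLE hp_m4r3 PARTITION OF hp FOR VALUES WITH (MODULUS 4, REMAINDER 3);

      -- before the fix, pruning for a query like this could skip a
      -- partition that actually had to be scanned:
      SELECT * FROM hp WHERE a = 42;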
* Make ecpg's rjulmdy() and rmdyjul() agree with their declarations.  (Tom Lane, 2021-01-28)

  We had "short *mdy" in the extern declarations, but "short mdy[3]" in
  the actual function definitions.  Per C99 these are equivalent, but
  recent versions of gcc have started to issue warnings about the
  inconsistency.  Clean it up before the warnings get any more widespread.

  Back-patch, in case anyone wants to build older PG versions with
  bleeding-edge compilers.

  Discussion: https://postgr.es/m/2401575.1611764534@sss.pgh.pa.us
* pgbench: Remove dead code  (Alvaro Herrera, 2021-01-28)

  doConnect() never returns connections in state CONNECTION_BAD, so
  checking for that is pointless.  Remove the code that does.

  This code has been dead since ba708ea3dc84, 20 years ago.

  Discussion: https://postgr.es/m/20210126195224.GA20361@alvherre.pgsql
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
* Remove gratuitous uses of deprecated SELECT INTO  (Peter Eisentraut, 2021-01-28)

  CREATE TABLE AS has been preferred over SELECT INTO (outside of ecpg and
  PL/pgSQL) for a long time.  There were still a few uses of SELECT INTO
  in tests and documentation, some old, some more recent.  This changes
  them to CREATE TABLE AS.  Some occurrences in the tests remain where
  they are specifically testing SELECT INTO parsing or similar.

  Discussion: https://www.postgresql.org/message-id/flat/96dc0df3-e13a-a85d-d045-d6e2c85218da%40enterprisedb.com
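  The equivalence in question (names illustrative):

      -- deprecated form
      SELECT a, b INTO new_tab FROM old_tab;

      -- preferred form
      CREATE TABLE new_tab AS SELECT a, b FROM old_tab;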
* Add direct conversion routines between EUC_TW and Big5.  (Heikki Linnakangas, 2021-01-28)

  Conversions between EUC_TW and Big5 were previously implemented by
  converting the whole input to MIC first, and then from MIC to the target
  encoding.  Implement functions to convert directly between the two.

  The reason to do this now is that I'm working on a patch that will
  change the conversion function signature so that if the input is
  invalid, we convert as much as we can and return the number of bytes
  successfully converted.  That's not possible if we use an intermediary
  format, because if an error happens in the intermediary -> final
  conversion, we lose track of the location of the invalid character in
  the original input.  Avoiding the intermediate step makes the
  conversions faster, too.

  Reviewed-by: John Naylor
  Discussion: https://www.postgresql.org/message-id/b9e3167f-f84b-7aa4-5738-be578a4db924%40iki.fi
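  These conversions are reachable from SQL via convert(); plain ASCII is
  valid in both encodings, so a hedged smoke test could be:

      SELECT convert('abc'::bytea, 'euc_tw', 'big5');
      SELECT convert('abc'::bytea, 'big5', 'euc_tw');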
* Add mbverifystr() functions specific to each encoding.  (Heikki Linnakangas, 2021-01-28)

  This makes the pg_verify_mbstr() function faster, by allowing more
  efficient encoding-specific implementations.  All the implementations
  included in this commit are pretty naive: they just call the same
  encoding-specific verifychar functions that were used previously, but
  that already gives a performance boost because the tight
  character-at-a-time loop is simpler.

  Reviewed-by: John Naylor
  Discussion: https://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01@iki.fi
* Don't add bailout adjustment for non-strict deserialize calls.  (Andrew Gierth, 2021-01-28)

  When building aggregate expression steps, strict checks need a bailout
  jump for when a null value is encountered, so there is a list of steps
  that require later adjustment.  Adding entries to that list for steps
  that aren't actually strict would be harmless, except that there is an
  Assert which catches them.  This leads to spurious errors on
  assert-enabled builds, for data sets that trigger parallel aggregation
  of an aggregate with a non-strict deserialization function (no such
  aggregates exist in the core system).

  Repair by not adding the adjustment entry when it's not needed.

  Backpatch back to 11 where the code was introduced.

  Per a report from Darafei (Komzpa) of the PostGIS project; analysis and
  patch by me.

  Discussion: https://postgr.es/m/87mty7peb3.fsf@news-spur.riddles.org.uk
* Refactor SQL functions of SHA-2 in cryptohashfuncs.c  (Michael Paquier, 2021-01-28)

  The same code pattern was repeated four times when compiling a SHA-2
  hash.  This refactoring has the advantage of issuing a compilation
  warning if a new value is added to pg_cryptohash_type, so that anybody
  doing an addition in this area would need to consider whether support
  for a new SQL function is needed or not.

  Author: Sehrope Sarkuni, Michael Paquier
  Discussion: https://postgr.es/m/YA7DvLRn2xnTgsMc@paquier.xyz
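  The four SQL-callable SHA-2 functions whose implementation this unifies:

      SELECT sha224('abc'), sha256('abc'), sha384('abc'), sha512('abc');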
* Reduce the default value of vacuum_cost_page_miss.  (Peter Geoghegan, 2021-01-27)

  When commit f425b605 introduced cost based vacuum delays back in 2004,
  the defaults reflected then-current trends in hardware, as well as
  certain historical limitations in PostgreSQL.

  There have been enormous improvements in both areas since that time.
  The cost limit GUC defaults finally became much more representative of
  current trends following commit cbccac37, which decreased
  autovacuum_vacuum_cost_delay's default by 10x for PostgreSQL 12 (it went
  from 20ms to only 2ms).

  The relative costs have shifted too.  This should also be accounted for
  by the defaults.  More specifically, the relative importance of avoiding
  dirtying pages within VACUUM has greatly increased, primarily due to
  main memory capacity scaling and trends in flash storage.  Within
  Postgres itself, improvements like sequential access during index
  vacuuming (at least in nbtree and GiST indexes) have also been
  contributing factors.

  To reflect all this, decrease the default of vacuum_cost_page_miss to 2.
  Since the default of vacuum_cost_page_dirty remains 20, dirtying a page
  is now considered 10x "costlier" than a page miss by default.

  Author: Peter Geoghegan <pg@bowt.ie>
  Discussion: https://postgr.es/m/CAH2-WzmLPFnkWT8xMjmcsm7YS3+_Qi3iRWAb2+_Bc8UhVyHfuA@mail.gmail.com
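  The resulting default cost relationships, as visible from a session:

      SHOW vacuum_cost_page_hit;    -- 1
      SHOW vacuum_cost_page_miss;   -- now 2
      SHOW vacuum_cost_page_dirty;  -- 20, i.e. 10x a page miss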
* In TrimCLOG(), don't reset XactCtl->shared->latest_page_number.  (Robert Haas, 2021-01-27)

  Since the CLOG page number is not recorded directly in the checkpoint
  record, we have to use ShmemVariableCache->nextXid to figure out the
  latest CLOG page number at the start of recovery.  However, as recovery
  progresses, replay of CLOG/EXTEND records will update our notion of the
  latest page number, and we should rely on that being accurate rather
  than recomputing the value based on an updated notion of nextXid.
  ShmemVariableCache->nextXid is only an approximation during recovery
  anyway, whereas CLOG/EXTEND records are an authoritative representation
  of how the SLRU has been updated.

  Commit 0fcc2decd485a61321a3220d8f76cb108b082009 makes this
  simplification possible, as before that change clog_redo() might have
  injected a bogus value here, and we'd want to get rid of that before
  entering normal running.

  Patch by me, reviewed by Heikki Linnakangas.

  Discussion: http://postgr.es/m/CA+TgmoZYig9+AQodhF5sRXuKkJ=RgFDugLr3XX_dz_F-p=TwTg@mail.gmail.com
* In clog_redo(), don't set XactCtl->shared->latest_page_number.  (Robert Haas, 2021-01-27)

  The comment is no longer accurate, and hasn't been entirely accurate
  since Hot Standby was introduced.  The original idea here was that
  StartupCLOG() wouldn't be called until the end of recovery and therefore
  this value would be uninitialized when this code is reached, but Hot
  Standby made that true only when hot_standby=off, and commit
  1f113abdf87cd085dee3927960bb4f70442b7250 means that this value is now
  always initialized before replay even starts.

  The original purpose of this code was to bypass the sanity check in
  SimpleLruTruncate(), which will no longer occur: now, if something is
  wrong, that sanity check might trip during recovery.  That's probably a
  good thing, because in the current code base latest_page_number should
  always be initialized and therefore we expect that the sanity check
  should pass.  If it doesn't, something has gone wrong, and complaining
  about it is appropriate.

  Patch by me, reviewed by Heikki Linnakangas.

  Discussion: http://postgr.es/m/CA+TgmoZYig9+AQodhF5sRXuKkJ=RgFDugLr3XX_dz_F-p=TwTg@mail.gmail.com
* Move StartupCLOG() calls to just after we initialize ShmemVariableCache.  (Robert Haas, 2021-01-27)

  Previously, the hot_standby=off code path did this at end of recovery,
  while the hot_standby=on code path did it at the beginning of recovery.
  It's better to do this in only one place because (a) it's simpler,
  (b) StartupCLOG() is trivial so trying to postpone the work isn't
  useful, and (c) this will make it possible to simplify some other logic.

  Patch by me, reviewed by Heikki Linnakangas.

  Discussion: http://postgr.es/m/CA+TgmoZYig9+AQodhF5sRXuKkJ=RgFDugLr3XX_dz_F-p=TwTg@mail.gmail.com
* Fix GiST index deletion assert issue.  (Peter Geoghegan, 2021-01-26)

  Avoid calling heap_index_delete_tuples() with an empty deltids array to
  avoid an assertion failure.

  This issue was arguably an oversight in commit b5f58cf2, though the
  failing assert itself was added by my recent commit d168b666.  No
  backpatch, though, since the oversight is harmless in the back branches.

  Author: Peter Geoghegan <pg@bowt.ie>
  Reported-By: Jaime Casanova <jcasanov@systemguards.com.ec>
  Discussion: https://postgr.es/m/CAJKUy5jscES84n3puE=sYngyF+zpb4wv8UMtuLnLPv5z=6yyNw@mail.gmail.com
* Refactor code in tablecmds.c to check and process tablespace moves  (Michael Paquier, 2021-01-27)

  Two code paths of tablecmds.c (for relations with storage and without
  storage) use the same logic to check if the move of a relation to a new
  tablespace is allowed or not and to update pg_class.reltablespace and
  pg_class.relfilenode.

  A potential TABLESPACE clause for REINDEX, CLUSTER and VACUUM FULL needs
  similar checks to make sure that nothing is moved around in illegal ways
  (no mapped relations, shared relations only in pg_global, no move of
  temp tables owned by other backends).

  This reorganizes the existing code of ALTER TABLE so that all this logic
  is controlled by two new routines that can be reused for the other
  commands able to move relations across tablespaces, limiting the number
  of code paths in need of the same protections.  This also removes some
  code that was duplicated for tables with and without storage for ALTER
  TABLE.

  Author: Alexey Kondratov, Michael Paquier
  Discussion: https://postgr.es/m/YA+9mAMWYLXJMVPL@paquier.xyz
* Rethink recently-added SPI interfaces.  (Tom Lane, 2021-01-26)

  SPI_execute_with_receiver and SPI_cursor_parse_open_with_paramlist are
  new in v14 (cf. commit 2f48ede08).  Before they can get out the door,
  let's change their APIs to follow the practice recently established by
  SPI_prepare_extended etc: shove all optional arguments into a struct
  that callers are supposed to pre-zero.  The hope is to allow future
  addition of more options without either API breakage or a continuing
  proliferation of new SPI entry points.  With that in mind, choose
  slightly more generic names for them: SPI_execute_extended and
  SPI_cursor_parse_open respectively.

  Discussion: https://postgr.es/m/CAFj8pRCLPdDAETvR7Po7gC5y_ibkn_-bOzbeJb39WHms01194Q@mail.gmail.com
* Suppress compiler warnings from commit ee895a655.  (Tom Lane, 2021-01-26)

  For obscure reasons, some buildfarm members are now generating
  complaints about plpgsql_call_handler's "retval" variable possibly being
  used uninitialized.  It seems no less safe than it was before that
  commit, but these complaints are (mostly?) new.  I trust that
  initializing the variable where it's declared will be enough to shut
  that up.

  I also notice that some compilers are warning about setjmp clobber of
  the same variable, which is maybe a bit more defensible.  Mark it
  volatile to silence that.

  Also, rearrange the logic to give procedure_resowner a single point of
  initialization, in hopes of silencing some setjmp-clobber warnings about
  that.  (Marking it volatile would serve too, but its sibling variables
  are depending on single assignment, so let's stick with that method.)

  Discussion: https://postgr.es/m/E1l4F1z-0000cN-Lx@gemulon.postgresql.org
* Code review for psql's helpSQL() function.  (Tom Lane, 2021-01-26)

  The loops to identify word boundaries could access past the end of the
  input string.  Likely that would never result in an actual crash, but it
  makes valgrind unhappy.

  The logic to try different numbers of words didn't work when the input
  has two words but we only have a match to the first, e.g. "\h with
  select".  (We must "continue" the pass loop, not "break".)

  The logic to compute nl_count was bizarrely managed, and in at least two
  code paths could end up calling PageOutput with nl_count = 0, resulting
  in failing to paginate output that should have been fed to the pager.
  Also, in v12 and up, the nl_count calculation hadn't been updated to
  account for the addition of a URL.

  The PQExpBuffer holding the command syntax details wasn't freed,
  resulting in a session-lifespan memory leak.

  While here, improve some comments, choose a more descriptive name for a
  variable, and fix an inconsistent datatype choice for another variable.

  Per bug #16837 from Alexander Lakhin.  This code is very old, so
  back-patch to all supported branches.

  Kyotaro Horiguchi and Tom Lane

  Discussion: https://postgr.es/m/16837-479bcd56040c71b3@postgresql.org
* Improve performance of repeated CALLs within plpgsql procedures.  (Tom Lane, 2021-01-25)

  This patch essentially is cleaning up technical debt left behind by the
  original implementation of plpgsql procedures, particularly commit
  d92bc83c4.  That patch (or more precisely, follow-on patches fixing its
  worst bugs) forced us to re-plan CALL and DO statements each time
  through, if we're in a non-atomic context.  That wasn't for any
  fundamental reason, but just because use of a saved plan requires having
  a ResourceOwner to hold a reference count for the plan, and we had no
  suitable resowner at hand, nor would the available APIs support using
  one if we did.  While it's not that expensive to create a "plan" for
  CALL/DO, the cycles do add up in repeated executions.

  This patch therefore makes the following API changes:

  * GetCachedPlan/ReleaseCachedPlan are modified to let the caller specify
    which resowner to use to pin the plan, rather than forcing use of
    CurrentResourceOwner.

  * spi.c gains a "SPI_execute_plan_extended" entry point that lets
    callers say which resowner to use to pin the plan.  This borrows the
    idea of an options struct from the recently added SPI_prepare_extended,
    hopefully allowing future options to be added without more API breaks.
    This supersedes SPI_execute_plan_with_paramlist (which I've marked
    deprecated) as well as SPI_execute_plan_with_receiver (which is new in
    v14, so I just took it out altogether).

  * I also took the opportunity to remove the crude hack of letting
    plpgsql reach into SPI private data structures to mark SPI plans as
    "no_snapshot".  It's better to treat that as an option of
    SPI_prepare_extended.

  Now, when running a non-atomic procedure or DO block that contains any
  CALL or DO commands, plpgsql creates a ResourceOwner that will be used
  to pin the plans of the CALL/DO commands.  (In an atomic context, we
  just use CurrentResourceOwner, as before.)  Having done this, we can
  just save CALL/DO plans normally, whether or not they are used across
  transaction boundaries.  This seems to be good for something like 2X
  speedup of a CALL of a trivial procedure with a few simple argument
  expressions.  By restricting the creation of an extra ResourceOwner like
  this, there's essentially zero penalty in cases that can't benefit.

  Pavel Stehule, with some further hacking by me

  Discussion: https://postgr.es/m/CAFj8pRCLPdDAETvR7Po7gC5y_ibkn_-bOzbeJb39WHms01194Q@mail.gmail.com
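  A hedged sketch of the kind of workload that benefits (procedure name
  made up): a trivial procedure CALLed repeatedly from a non-atomic
  context, where the plan can now be cached across iterations:

      CREATE PROCEDURE noop(x int) LANGUAGE plpgsql AS $$ BEGIN END $$;

      DO $$
      BEGIN
        FOR i IN 1..100000 LOOP
          CALL noop(i);  -- no longer re-planned on every iteration
        END LOOP;
      END $$;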
* Fix two typos in snapbuild.c.  (Andres Freund, 2021-01-25)

  Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
  Discussion: https://postgr.es/m/c94be044-818f-15e3-1ad3-7a7ae2dfed0a@iki.fi
* Don't clobber the calling user's credentials cache in Kerberos test.  (Tom Lane, 2021-01-25)

  Embarrassing oversight in this test script, which fortunately is not run
  by default.

  Report and patch by Jacob Champion.

  Discussion: https://postgr.es/m/1fcb175bafef6560f47a8c31229fa7c938486b8d.camel@vmware.com
* Fix broken ruleutils support for function TRANSFORM clauses.  (Tom Lane, 2021-01-25)

  I chanced to notice that this dumped core due to a faulty Assert.  To
  add insult to injury, the output has been misformatted since v11.
  Obviously we need some regression testing here.

  Discussion: https://postgr.es/m/d1cc628c-3953-4209-957b-29427acc38c8@www.fastmail.com
* Remove CheckpointLock.  (Robert Haas, 2021-01-25)

  Up until now, we've held this lock when performing a checkpoint or
  restartpoint, but commit 076a055acf3c55314de267c62b03191586d79cf6 back
  in 2004 and commit 7e48b77b1cebb9a43f9fdd6b17128a0ba36132f9 from 2009,
  taken together, have removed all need for this.  In the present code,
  there's only ever one process entitled to attempt a checkpoint: either
  the checkpointer, during normal operation, or the postmaster, during
  single-user operation.  So, we don't need the lock.

  One possible concern in making this change is that it means that a
  substantial amount of code where HOLD_INTERRUPTS() was previously in
  effect due to the preceding LWLockAcquire() will now be running without
  that.  This could mean that ProcessInterrupts() gets called in places
  from which it didn't before.  However, this seems unlikely to do very
  much, because the checkpointer doesn't have any signal mapped to die(),
  so it's not clear how, for example, ProcDiePending = true could happen
  in the first place.  Similarly with ClientConnectionLost and recovery
  conflicts.

  Also, if there are any such problems, we might want to fix them rather
  than reverting this, since running lots of code with interrupt handling
  suspended is generally bad.

  Patch by me, per an inquiry by Amul Sul.  Review by Tom Lane and Michael
  Paquier.

  Discussion: http://postgr.es/m/CAAJ_b97XnBBfYeSREDJorFsyoD1sHgqnNuCi=02mNQBUMnA=FA@mail.gmail.com
* Remove duplicate include  (Peter Eisentraut, 2021-01-25)

  Reported-by: Ashutosh Sharma <ashu.coek88@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAE9k0PkORqHHGKY54-sFyDpP90yAf%2B05Auc4fs9EAn4J%2BuBeUQ%40mail.gmail.com
* Fix hypothetical bug in heap backward scans  (David Rowley, 2021-01-25)

  Both heapgettup() and heapgettup_pagemode() incorrectly set the first
  page to scan in a backward scan in which the number of pages to scan was
  specified by heap_setscanlimits().  The code incorrectly started the
  scan at the end of the relation when startBlk was 0, or otherwise at
  startBlk - 1, neither of which is correct when only scanning a subset of
  pages.

  The fix here checks if heap_setscanlimits() has changed the number of
  pages to scan and if so we set the first page to scan as the final page
  in the specified range during backward scans.

  Proper adjustment of this code was forgotten when heap_setscanlimits()
  was added in 7516f5259 back in 9.5.  However, in practice, nothing in
  core code performs backward scans after having used
  heap_setscanlimits(); still, it is possible an extension uses the heap
  functions in this way, hence backpatch.

  An upcoming patch does use heap_setscanlimits() with backward scans, so
  this must be fixed before that can go in.

  Author: David Rowley
  Discussion: https://postgr.es/m/CAApHDvpGc9h0_oVD2CtgBcxCS1N-qDYZSeBRnUh+0CWJA9cMaA@mail.gmail.com
  Backpatch-through: 9.5, all supported versions
* Fix ALTER PUBLICATION...DROP TABLE behavior.  (Amit Kapila, 2021-01-25)

  Commit 69bd60672 fixed the initialization of streamed transactions for
  RelationSyncEntry.  It forgot to initialize the publication actions
  while invalidating the RelationSyncEntry, with the result that even
  after a relation is dropped from a particular publication we still
  publish its changes.  Fix it by initializing pubactions when the entry
  gets invalidated.

  Author: Japin Li and Bharath Rupireddy
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CALj2ACV+0UFpcZs5czYgBpujM9p0Hg1qdOZai_43OU7bqHU_xw@mail.gmail.com
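  The user-visible scenario this fixes, with illustrative names:

      CREATE PUBLICATION pub FOR TABLE t1, t2;
      ALTER PUBLICATION pub DROP TABLE t2;
      -- before the fix, changes to t2 could still reach subscribers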
* Make storage/standby.h compile standalone again.  (Tom Lane, 2021-01-24)

  This file has failed headerscheck/cpluspluscheck verification since
  commit 0650ff230, as a result of referencing typedef TimestampTz
  without including the appropriate header.
* Update time zone data files to tzdata release 2021a.  (Tom Lane, 2021-01-24)

  DST law changes in Russia (Volgograd zone) and South Sudan.  Historical
  corrections for Australia, Bahamas, Belize, Bermuda, Ghana, Israel,
  Kenya, Nigeria, Palestine, Seychelles, and Vanuatu.  Notably, the
  Australia/Currie zone has been corrected to the point where it is
  identical to Australia/Hobart.
* Remove make_diff set of tools  (Magnus Hagander, 2021-01-24)

  These are mostly obsoleted by the switch to git, and it's easier to
  remove them than to update the incorrect documentation.

  Discussion: https://postgr.es/m/CABUevEwmASMn4WRJ6RagBx43sj10ctfMHcMA_-7KA3pDYmwpJw@mail.gmail.com
* Fix COPY FREEZE with CLOBBER_CACHE_ALWAYS  (Tomas Vondra, 2021-01-24)

  This adds code omitted from commit 7db0cd2145 by accident, which had two
  consequences.  Firstly, only rows inserted by heap_multi_insert were
  frozen as expected when running COPY FREEZE, while heap_insert left rows
  unfrozen.  That however includes rows in TOAST tables, so a lot of data
  might have been left unfrozen.  Secondly, a page might have been left
  partially empty after a relcache invalidation.

  This addresses both of those issues.

  Discussion: https://postgr.es/m/CABOikdN-ptGv0mZntrK2Q8OtfUuAjqaYMGmkdU1dCKFtUxVLrg@mail.gmail.com
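  For reference, the shape of a COPY FREEZE load (path and names are
  illustrative; the table must be created or truncated in the same
  transaction for FREEZE to be accepted):

      BEGIN;
      TRUNCATE big_tab;
      COPY big_tab FROM '/tmp/data.csv' WITH (FORMAT csv, FREEZE);
      COMMIT;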
* Update ecpg's connect-test1 for connection-failure message changes.  (Tom Lane, 2021-01-23)

  I should have updated this in commits 52a10224e and follow-ons, but I
  missed it because it's not run by default, and none of the buildfarm
  runs it either.  Maybe we should try to improve that situation.

  Discussion: https://postgr.es/m/CAH2-Wz=j9SRW=s5BV4-3k+=tr4N3A03in+gTuVA09vNF+-iHjA@mail.gmail.com
* Introduce SHA1 implementations in the cryptohash infrastructure  (Michael Paquier, 2021-01-23)

  With this commit, SHA1 goes through the implementation provided by
  OpenSSL via EVP when building the backend with it, and uses as a
  fallback the KAME implementation, which was located in pgcrypto and
  already shaped for an integration with a set of init, update and final
  routines.  Structures and routines have been renamed to make things
  consistent with the fallback implementations of MD5 and SHA2.

  uuid-ossp has used for ages a shortcut with pgcrypto to fetch a copy of
  SHA1 if needed.  This was built depending on the build options within
  ./configure, so this cleans up some code and removes the build
  dependency between pgcrypto and uuid-ossp.

  Note that this will help with the refactoring of HMAC, as pgcrypto
  offers the option to use MD5, SHA1 or SHA2, so only the second option
  was missing to make that possible.

  Author: Michael Paquier
  Reviewed-by: Heikki Linnakangas
  Discussion: https://postgr.es/m/X9HXKTgrvJvYO7Oh@paquier.xyz
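  A hedged smoke test of the user-facing consumers of this code path,
  assuming the pgcrypto and uuid-ossp extensions are available:

      CREATE EXTENSION pgcrypto;
      SELECT digest('abc', 'sha1');          -- exercises the SHA1 code

      CREATE EXTENSION "uuid-ossp";
      SELECT uuid_generate_v5(uuid_ns_url(), 'https://www.postgresql.org/');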
* Suppress bison warning in ecpg grammar.  (Tom Lane, 2021-01-22)

  opt_distinct_clause is only used in PLpgSQL_Expr, which ecpg ignores, so
  it needs to ignore opt_distinct_clause too.

  My oversight in 7cd9765f9; reported by Bruce Momjian.

  Discussion: https://postgr.es/m/E1l33wr-0005sJ-9n@gemulon.postgresql.org
* Avoid redundantly prefixing PQerrorMessage for a connection failure.  (Tom Lane, 2021-01-22)

  libpq's error messages for connection failures pretty well stand on
  their own, especially since commits 52a10224e/27a48e5a1.  Prefixing them
  with 'could not connect to database "foo"' or the like is just
  redundant, and perhaps even misleading if the specific database name
  isn't relevant to the failure.  (When it is, we trust that the backend's
  error message will include the DB name.)  Indeed, psql hasn't used any
  such prefix in a long time.  So, make all our other programs and
  documentation examples agree with psql's practice.

  Discussion: https://postgr.es/m/1094524.1611266589@sss.pgh.pa.us
* Re-allow DISTINCT in pl/pgsql expressions.  (Tom Lane, 2021-01-22)

  I'd omitted this from the grammar in commit c9d529848, figuring that it
  wasn't worth supporting.  However we already have one complaint, so it
  seems that judgment was wrong.  It doesn't require a huge amount of
  code, so add it back.  (I'm still drawing the line at
  UNION/INTERSECT/EXCEPT though: those'd require an unreasonable amount of
  grammar refactoring, and the single-result-row restriction makes them
  near useless anyway.)

  Also rethink the documentation: this behavior is a property of all
  pl/pgsql expressions, not just assignments.

  Discussion: https://postgr.es/m/20210122134106.e94c5cd7@mail.verfriemelt.org
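  A hedged illustration of the re-allowed syntax, assuming v14's
  expression parsing (names made up); DISTINCT is handy here because it
  collapses duplicates to satisfy the single-result-row restriction:

      DO $$
      DECLARE v int;
      BEGIN
        v := DISTINCT g / 2 FROM generate_series(2, 3) g;
        RAISE NOTICE 'v = %', v;  -- 2/2 and 3/2 both yield 1
      END $$;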
* Remove bogus tracepoint  (Peter Eisentraut, 2021-01-22)

  Calls to LWLockWaitForVar() fired the TRACE_POSTGRESQL_LWLOCK_ACQUIRE
  tracepoint, but LWLockWaitForVar() never actually acquires the LWLock.
  (Probably a copy/paste bug in 68a2e52bbaf.)  Remove it.

  Author: Craig Ringer <craig.ringer@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAGRY4nxJo+-HCC2i5H93ttSZ4gZO-FSddCwvkb-qAfQ1zdXd1w@mail.gmail.com
* Move SSL information callback earlier to capture more information  (Michael Paquier, 2021-01-22)

  The callback for retrieving state change information during connection
  setup was only installed when the connection was mostly set up, and thus
  didn't provide much information and missed all the details related to
  the handshake.

  This also extends the callback with SSL_state_string_long() to print
  more information about the state change within the SSL object handled.

  While there, fix some comments which were incorrectly referring to the
  callback and its previous location in fe-secure.c.

  Author: Daniel Gustafsson
  Discussion: https://postgr.es/m/232CF476-94E1-42F1-9408-719E2AEC5491@yesql.se
* Improve new wording of libpq's connection failure messages.  (Tom Lane, 2021-01-21)

  "connection to server so-and-so failed:" seems clearer than the previous
  wording "could not connect to so-and-so:" (introduced by 52a10224e),
  because the latter suggests a network-level connection failure.  We're
  now prefixing this string to all types of connection failures, for
  instance authentication failures; so we need wording that doesn't imply
  a low-level error.

  Per discussion with Robert Haas.

  Discussion: https://postgr.es/m/CA+TgmobssJ6rS22dspWnu-oDxXevGmhMD8VcRBjmj-b9UDqRjw@mail.gmail.com
* Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.  (Tom Lane, 2021-01-21)

  Previously, pull_varnos() took the relids of a PlaceHolderVar as being
  equal to the relids in its contents, but that fails to account for the
  possibility that we have to postpone evaluation of the PHV due to outer
  joins.  This could result in a malformed plan.  The known cases end up
  triggering the "failed to assign all NestLoopParams to plan nodes"
  sanity check in createplan.c, but other symptoms may be possible.

  The right value to use is the join level we actually intend to evaluate
  the PHV at.  We can get that from the ph_eval_at field of the associated
  PlaceHolderInfo.  However, there are some places that call pull_varnos()
  before the PlaceHolderInfos have been created; in that case, fall back
  to the conservative assumption that the PHV will be evaluated at its
  syntactic level.  (In principle this might result in missing some legal
  optimization, but I'm not aware of any cases where it's an issue in
  practice.)  Things are also a bit ticklish for calls occurring during
  deconstruct_jointree(), but AFAICS the ph_eval_at fields should have
  reached their final values by the time we need them.

  The main problem in making this work is that pull_varnos() has no way to
  get at the PlaceHolderInfos.  We can fix that easily, if a bit
  tediously, in HEAD by passing it the planner "root" pointer.  In the
  back branches that'd cause an unacceptable API/ABI break for extensions,
  so leave the existing entry points alone and add new ones with the
  additional parameter.  (If an old entry point is called and encounters a
  PHV, it'll fall back to using the syntactic level, again possibly
  missing some valid optimization.)

  Back-patch to v12.  The computation is surely also wrong before that,
  but it appears that we cannot reach a bad plan thanks to join order
  restrictions imposed on the subquery that the PlaceHolderVar came from.
  The error only became reachable when commit 4be058fe9 allowed trivial
  subqueries to be collapsed out completely, eliminating their join order
  restrictions.

  Per report from Stephan Springl.

  Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
* Fix initialization of FDW batching in ExecInitModifyTable  (Tomas Vondra, 2021-01-21)

  ExecInitModifyTable has to initialize batching for all result relations,
  not just the first one.  Furthermore, when junk filters were necessary,
  the pointer pointed past the mtstate->resultRelInfo array.

  Per reports from multiple non-x86 animals (florican, locust, ...).

  Discussion: https://postgr.es/m/20200628151002.7x5laxwpgvkyiu3q@development
* Switch "cl /?" to "cl /help" in MSVC scripts for platform detectionMichael Paquier2021-01-21
| | | | | | | | | | | | | | | | "cl /?" produces a different output if run on a real or a virtual drive (this can be set with a simple subst command), causing an error in the MSVC scripts if building on a virtual drive because the platform to use cannot be detected. "cl /help", on the contrary, produces a consistent output if used on a real or virtual drive. Changing to "/help" allows the compilation to work with a virtual drive as long as the top of the code repository is part of the drive, without impacting the build on real drives. Reported-by: Robert Grange Author: Juan José Santamaría Flecha Discussion: https://postgr.es/m/16825-c4f104bcebc67034@postgresql.org
* Implement support for bulk inserts in postgres_fdw  (Tomas Vondra, 2021-01-20)

  Extends the FDW API to allow batching inserts into foreign tables.  That
  is usually much more efficient than inserting individual rows, due to
  high latency for each round-trip to the foreign server.

  It was possible to implement something similar in the regular FDW API,
  but it was inconvenient and there were issues with reporting the number
  of actually inserted rows etc.  This extends the FDW API with two new
  functions:

  * GetForeignModifyBatchSize - allows the FDW picking optimal batch size
  * ExecForeignBatchInsert - inserts a batch of rows at once

  Currently, only INSERT queries support batching.  Support for DELETE and
  UPDATE may be added in the future.

  This also implements batching for postgres_fdw.  The batch size may be
  specified using the "batch_size" option both at the server and table
  level.

  The initial patch version was written by me, but it was rewritten and
  improved in many ways by Takayuki Tsunakawa.

  Author: Takayuki Tsunakawa
  Reviewed-by: Tomas Vondra, Amit Langote
  Discussion: https://postgr.es/m/20200628151002.7x5laxwpgvkyiu3q@development
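  Setting the batch size at the two supported levels (server and table
  names are illustrative):

      ALTER SERVER remote_pg OPTIONS (ADD batch_size '100');
      ALTER FOREIGN TABLE remote_tab OPTIONS (ADD batch_size '500');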
* psql \dX: list extended statistics objects  (Tomas Vondra, 2021-01-20)

  The new command lists extended statistics objects.  All past releases
  with extended statistics are supported.

  This is a simplified version of commit 891a1d0bca, which had to be
  reverted due to not considering that pg_statistic_ext_data is not
  accessible by regular users.  Fields requiring access to this catalog
  were removed.  It's possible to add them, but it'll require changes to
  core.

  Author: Tatsuro Yamada
  Reviewed-by: Julien Rouhaud, Alvaro Herrera, Tomas Vondra, Noriyoshi Shinoda
  Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
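  Typical usage from a psql session (statistics object name is made up):

      CREATE STATISTICS stts_1 (dependencies) ON a, b FROM t1;
      \dX   -- lists stts_1 along with any other extended statistics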
* Further tweaking of PG_SYSROOT heuristics for macOS.  (Tom Lane, 2021-01-20)

  It emerges that in some phases of the moon (perhaps to do with directory
  entry order?), xcrun will report that the SDK path is
  /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
  which is normally a symlink to a version-numbered sibling directory.
  Our heuristic to skip non-version-numbered pathnames was rejecting that,
  which is the wrong thing to do.  We'd still like to end up with a
  version-numbered PG_SYSROOT value, but we can have that by dereferencing
  the symlink.

  Like the previous fix, back-patch to all supported versions.

  Discussion: https://postgr.es/m/522433.1611089678@sss.pgh.pa.us
* Fix bug in detecting concurrent page splits in GiST insert  (Heikki Linnakangas, 2021-01-20)

  In commit 9eb5607e699, I got the condition on checking for a split or
  deleted page wrong: I used && instead of ||.  The comment correctly said
  "concurrent split _or_ deletion".  As a result, GiST insertion could
  miss a concurrent split, and insert to the wrong page.  Duncan Sands
  demonstrated this with a test script that did a lot of concurrent
  inserts.

  Backpatch to v12, where this was introduced.  REINDEX is required to fix
  indexes that were affected by this bug.

  Backpatch-through: 12
  Reported-by: Duncan Sands
  Discussion: https://www.postgresql.org/message-id/a9690483-6c6c-3c82-c8ba-dc1a40848f11%40deepbluecap.com