path: root/src/backend/utils/cache
Commit message / Author / Age
...
* meson: Add initial version of meson based build systemAndres Freund2022-09-21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Autoconf is showing its age, fewer and fewer contributors know how to wrangle it. Recursive make has a lot of hard to resolve dependency issues and slow incremental rebuilds. Our home-grown MSVC build system is hard to maintain for developers not using Windows and runs tests serially. While these and other issues could individually be addressed with incremental improvements, together they seem best addressed by moving to a more modern build system. After evaluating different build system choices, we chose to use meson, to a good degree based on the adoption by other open source projects. We decided that it's more realistic to commit a relatively early version of the new build system and mature it in tree. This commit adds an initial version of a meson based build system. It supports building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD, Solaris and Windows (however only gcc is supported on aix, solaris). For Windows/MSVC postgres can now be built with ninja (faster, particularly for incremental builds) and msbuild (supporting the visual studio GUI, but building slower). Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM bitcode generation, documentation adjustments) are done in subsequent commits requiring further review. Other aspects (e.g. not installing test-only extensions) are not yet addressed. When building on Windows with msbuild, builds are slower when using a visual studio version older than 2019, because those versions do not support MultiToolTask, required by meson for intra-target parallelism. The plan is to remove the MSVC specific build system in src/tools/msvc soon after reaching feature parity. However, we're not planning to remove the autoconf/make build system in the near future. Likely we're going to keep at least the parts required for PGXS to keep working around until all supported versions build with meson. Some initial help for postgres developers is at https://wiki.postgresql.org/wiki/Meson With contributions from Thomas Munro, John Naylor, Stone Tickle and others. Author: Andres Freund <andres@anarazel.de> Author: Nazir Bilal Yavuz <byavuz81@gmail.com> Author: Peter Eisentraut <peter@eisentraut.org> Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com> Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
* Harmonize more parameter names in bulk.Peter Geoghegan2022-09-20
| | | | | | | | | | | | | | | | Make sure that function declarations use names that exactly match the corresponding names from function definitions in optimizer, parser, utility, libpq, and "commands" code, as well as in remaining library code. Do the same for all code related to frontend programs (with the exception of pg_dump/pg_dumpall related code). Like other recent commits that cleaned up function parameter names, this commit was written with help from clang-tidy. Later commits will handle ecpg and pg_dump/pg_dumpall. Author: Peter Geoghegan <pg@bowt.ie> Reviewed-By: David Rowley <dgrowleyml@gmail.com> Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
* Split up guc.c for better build speed and ease of maintenance.Tom Lane2022-09-13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | guc.c has grown to be one of our largest .c files, making it a bottleneck for compilation. It's also acquired a bunch of knowledge that'd be better kept elsewhere, because of our not very good habit of putting variable-specific check hooks here. Hence, split it up along these lines: * guc.c itself retains just the core GUC housekeeping mechanisms. * New file guc_funcs.c contains the SET/SHOW interfaces and some SQL-accessible functions for GUC manipulation. * New file guc_tables.c contains the data arrays that define the built-in GUC variables, along with some already-exported constant tables. * GUC check/assign/show hook functions are moved to the variable's home module, whenever that's clearly identifiable. A few hard- to-classify hooks ended up in commands/variable.c, which was already a home for miscellaneous GUC hook functions. To avoid cluttering a lot more header files with #include "guc.h", I also invented a new header file utils/guc_hooks.h and put all the GUC hook functions' declarations there, regardless of their originating module. That allowed removal of #include "guc.h" from some existing headers. The fallout from that (hopefully all caught here) demonstrates clearly why such inclusions are best minimized: there are a lot of files that, for example, were getting array.h at two or more levels of remove, despite not having any connection at all to GUCs in themselves. There is some very minor code beautification here, such as renaming a couple of inconsistently-named hook functions and improving some comments. But mostly this just moves code from point A to point B and deals with the ensuing needs for #include adjustments and exporting a few functions that previously weren't exported. Patch by me, per a suggestion from Andres Freund; thanks also to Michael Paquier for the idea to invent guc_funcs.c. Discussion: https://postgr.es/m/587607.1662836699@sss.pgh.pa.us
* Make role grant system more consistent with other privileges.Robert Haas2022-08-22
Previously, membership of role A in role B could be recorded in the catalog tables only once. This meant that a new grant of role A to role B would overwrite the previous grant. For other object types, a new grant of permission on an object - in this case role A - exists alongside the existing grant provided that the grantor is different. Either grant can be revoked independently of the other, and permissions remain so long as at least one grant remains. Make role grants work similarly.

Previously, when granting membership in a role, the superuser could specify any role whatsoever as the grantor, but for other object types, the grantor of record must be either the owner of the object, or a role that currently has privileges to perform a similar GRANT. Implement the same scheme for role grants, treating the bootstrap superuser as the role owner since roles do not have owners. This means that attempting to revoke a grant, or admin option on a grant, can now fail if there are dependent privileges, and that CASCADE can be used to revoke these. It also means that you can't grant ADMIN OPTION on a role back to a user who granted it directly or indirectly to you, similar to how you can't give WITH GRANT OPTION on a privilege back to a role which granted it directly or indirectly to you.

Previously, only the superuser could specify GRANTED BY with a user other than the current user. Relax that rule to allow the grantor to be any role whose privileges the current user possesses. This doesn't improve compatibility with what we do for other object types, where support for GRANTED BY is entirely vestigial, but it makes this feature more usable and seems to make sense to change at the same time we're changing related behaviors.

Along the way, fix "ALTER GROUP group_name ADD USER user_name" to require the same privileges as "GRANT group_name TO user_name". Previously, CREATEROLE privileges were sufficient for either, but only the former form was permissible with ADMIN OPTION on the role. Now, either CREATEROLE or ADMIN OPTION on the role suffices for either spelling.

Patch by me, reviewed by Stephen Frost.

Discussion: http://postgr.es/m/CA+TgmoaFr-RZeQ+WoQ5nKPv97oT9+aDgK_a5+qWHSgbDsMp1Vg@mail.gmail.com
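For illustration, a minimal sketch of the resulting behavior, assuming hypothetical roles staff, alice, role_admin and project_lead (none of these names come from the commit):

    -- two independent grants of the same membership, recorded with different grantors
    GRANT staff TO alice WITH ADMIN OPTION GRANTED BY role_admin;
    GRANT staff TO alice GRANTED BY project_lead;

    -- each grant can be revoked independently; CASCADE removes dependent privileges
    REVOKE ADMIN OPTION FOR staff FROM alice GRANTED BY role_admin CASCADE;
    REVOKE staff FROM alice GRANTED BY project_lead;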
* Move common catalog cache access routines to lsyscache.cAmit Kapila2022-08-02
In passing, move pg_relation_is_publishable next to similar functions.

Suggested-by: Alvaro Herrera
Author: Amit Kapila
Reviewed-by: Hou Zhijie
Discussion: https://postgr.es/m/CAHut+PupQ5UW9A9ut0Yjt21J9tHhx958z5L0k8-9hTYf_NYqxA@mail.gmail.com
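As a quick illustration (the table name is hypothetical), the relocated helper is also callable from SQL:

    -- true for ordinary tables that can be added to a publication,
    -- false for catalogs and other non-publishable relations
    SELECT pg_relation_is_publishable('my_table'::regclass);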
* Use TRUNCATE to preserve relfilenode for pg_largeobject + index.Robert Haas2022-07-28
| | | | | | | | | | | | | | | | | | | | | | Commit 9a974cbcba005256a19991203583a94b4f9a21a9 arranged to preserve the relfilenode of user tables across pg_upgrade, but failed to notice that pg_upgrade treats pg_largeobject as a user table and thus it needs the same treatment. Otherwise, large objects will appear to vanish after a pg_upgrade. Commit d498e052b4b84ae21b3b68d5b3fda6ead65d1d4d fixed this problem by teaching pg_dump to UPDATE pg_class.relfilenode for pg_largeobject and its index. However, because an UPDATE on the catalog rows doesn't change anything on disk, this can leave stray files behind in the new cluster. They will normally be empty, but it's a little bit untidy. Hence, this commit arranges to do the same thing using DDL. Specifically, it makes TRUNCATE work for the pg_largeobject catalog when in binary-upgrade mode, and it then uses that command in binary-upgrade dumps as a way of setting pg_class.relfilenode for pg_largeobject and its index. That way, the old files are removed from the new cluster. Discussion: http://postgr.es/m/CA+TgmoYYMXGUJO5GZk1-MByJGu_bB8CbOL6GJQC8=Bzt6x6vDg@mail.gmail.com
* Fix read_relmap_file() concurrency on Windows.Robert Haas2022-07-27
| | | | | | | | | | | | Commit d8cd0c6c95c0120168df93aae095df4e0682a08a introduced a file rename that could fail on Windows, probably due to other backends having an open file handle to the old file of the same name. Re-arrange the locking slightly to prevent that, by making sure the open() and close() run while we hold the lock. Thomas Munro. I added an explanatory comment. Discussion: https://postgr.es/m/CA%2BhUKGLZtCTgp4NTWV-wGbR2Nyag71%3DEfYTKjDKnk%2BfkhuFMHw%40mail.gmail.com
* Remove the restriction that the relmap must be 512 bytes.Robert Haas2022-07-26
Instead of relying on the ability to atomically overwrite the entire relmap file in one shot, write a new one and durably rename it into place. Remove the struct padding and the calculation showing why the map is exactly 512 bytes, and change the maximum number of entries to a nearby round number.

Patch by me, reviewed by Andres Freund and Dilip Kumar.

Discussion: http://postgr.es/m/CA+TgmoZq5%3DLWDK7kHaUbmWXxcaTuw_QwafgG9dr-BaPym_U8WQ%40mail.gmail.com
Discussion: http://postgr.es/m/CAFiTN-ttOXLX75k_WzRo9ar=VvxFhrHi+rJxns997F+yvkm==A@mail.gmail.com
* Fix omissions in support for the "regcollation" type.Tom Lane2022-07-17
The patch that added regcollation doesn't seem to have been too thorough about supporting it everywhere that other reg* types are supported. Fix that. (The find_expr_references omission is moderately serious, since it could result in missing expression dependencies. The others are less exciting.)

Noted while fixing bug #17483. Back-patch to v13 where regcollation was added.

Discussion: https://postgr.es/m/1423433.1652722406@sss.pgh.pa.us
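For context, regcollation behaves like the other reg* alias types, mapping between collation names and OIDs; a small sketch:

    SELECT '"POSIX"'::regcollation;        -- displays the collation name
    SELECT '"POSIX"'::regcollation::oid;   -- the underlying OID in pg_collation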
* Change internal RelFileNode references to RelFileNumber or RelFileLocator.Robert Haas2022-07-06
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We have been using the term RelFileNode to refer to either (1) the integer that is used to name the sequence of files for a certain relation within the directory set aside for that tablespace/database combination; or (2) that value plus the OIDs of the tablespace and database; or occasionally (3) the whole series of files created for a relation based on those values. Using the same name for more than one thing is confusing. Replace RelFileNode with RelFileNumber when we're talking about just the single number, i.e. (1) from above, and with RelFileLocator when we're talking about all the things that are needed to locate a relation's files on disk, i.e. (2) from above. In the places where we refer to (3) as a relfilenode, instead refer to "relation storage". Since there is a ton of SQL code in the world that knows about pg_class.relfilenode, don't change the name of that column, or of other SQL-facing things that derive their name from it. On the other hand, do adjust closely-related internal terminology. For example, the structure member names dbNode and spcNode appear to be derived from the fact that the structure itself was called RelFileNode, so change those to dbOid and spcOid. Likewise, various variables with names like rnode and relnode get renamed appropriately, according to how they're being used in context. Hopefully, this is clearer than before. It is also preparation for future patches that intend to widen the relfilenumber fields from its current width of 32 bits. Variables that store a relfilenumber are now declared as type RelFileNumber rather than type Oid; right now, these are the same, but that can now more easily be changed. Dilip Kumar, per an idea from me. Reviewed also by Andres Freund. I fixed some whitespace issues, changed a couple of words in a comment, and made one other minor correction. Discussion: http://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com Discussion: http://postgr.es/m/CAFiTN-vTe79M8uDH1yprOU64MNFE+R3ODRuA+JWf27JbhY4hJw@mail.gmail.com
* Add construct_array_builtin, deconstruct_array_builtinPeter Eisentraut2022-07-01
| | | | | | | | | | | | | | | There were many calls to construct_array() and deconstruct_array() for built-in types, for example, when dealing with system catalog columns. These all hardcoded the type attributes necessary to pass to these functions. To simplify this a bit, add construct_array_builtin(), deconstruct_array_builtin() as wrappers that centralize this hardcoded knowledge. This simplifies many call sites and reduces the amount of hardcoded stuff that is spread around. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/2914356f-9e5f-8c59-2995-5997fc48bcba%40enterprisedb.com
* Avoid unnecessary MemSet callPeter Eisentraut2022-06-30
The variable in question was changed from a struct to a pointer some time ago (77947c51c08). Using MemSet to zero it still works but is obviously unidiomatic and confusing, so change it to a straight assignment.

Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAEudQApCeq4JjW1BdnwU=m=-DvG5WyUik0Yfn3p6UNphiHjj+w@mail.gmail.com
* Revert changes in HOT handling of BRIN indexesTomas Vondra2022-06-16
This reverts commits 5753d4ee32 and fe60b67250 that modified HOT to ignore BRIN indexes. The commit message for 5753d4ee32 claims that:

    When determining whether an index update may be skipped by using
    HOT, we can ignore attributes indexed only by BRIN indexes. There
    are no index pointers to individual tuples in BRIN, and the page
    range summary will be updated anyway as it relies on visibility
    info.

This is partially incorrect - it's true BRIN indexes don't point to individual tuples, so HOT chains are not an issue, but the visibility info is not sufficient to keep the index up to date. This can easily result in corrupted indexes, as demonstrated in the hackers thread.

This does not mean relaxing the HOT restrictions for BRIN is a lost cause, but it needs to handle the two aspects (allowing HOT chains and updating the page range summaries) separately. But that requires major changes, and it's too late for that in the current dev cycle.

Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
* Pre-beta mechanical code beautification.Tom Lane2022-05-12
Run pgindent, pgperltidy, and reformat-dat-files. I manually fixed a couple of comments that pgindent uglified.
* Remove extraneous blank lines before block-closing bracesAlvaro Herrera2022-04-13
These are useless and distracting. We wouldn't have written the code with them to begin with, so there's no reason to keep them.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
* Fix various typos and spelling mistakes in code commentsDavid Rowley2022-04-11
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
* Revert "Logical decoding of sequences"Tomas Vondra2022-04-07
| | | | | | | | | | | | | | | | | | | | | This reverts a sequence of commits, implementing features related to logical decoding and replication of sequences: - 0da92dc530c9251735fc70b20cd004d9630a1266 - 80901b32913ffa59bf157a4d88284b2b3a7511d9 - b779d7d8fdae088d70da5ed9fcd8205035676df3 - d5ed9da41d96988d905b49bebb273a9b2d6e2915 - a180c2b34de0989269fdb819bff241a249bf5380 - 75b1521dae1ff1fde17fda2e30e591f2e5d64b6a - 2d2232933b02d9396113662e44dca5f120d6830e - 002c9dd97a0c874fd1693a570383e2dd38cd40d5 - 05843b1aa49df2ecc9b97c693b755bd1b6f856a9 The implementation has issues, mostly due to combining transactional and non-transactional behavior of sequences. It's not clear how this could be fixed, but it'll require reworking significant part of the patch. Discussion: https://postgr.es/m/95345a19-d508-63d1-860a-f5c2f41e8d40@enterprisedb.com
* pgstat: store statistics in shared memory.Andres Freund2022-04-06
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously the statistics collector received statistics updates via UDP and shared statistics data by writing them out to temporary files regularly. These files can reach tens of megabytes and are written out up to twice a second. This has repeatedly prevented us from adding additional useful statistics. Now statistics are stored in shared memory. Statistics for variable-numbered objects are stored in a dshash hashtable (backed by dynamic shared memory). Fixed-numbered stats are stored in plain shared memory. The header for pgstat.c contains an overview of the architecture. The stats collector is not needed anymore, remove it. By utilizing the transactional statistics drop infrastructure introduced in a prior commit statistics entries cannot "leak" anymore. Previously leaked statistics were dropped by pgstat_vacuum_stat(), called from [auto-]vacuum. On systems with many small relations pgstat_vacuum_stat() could be quite expensive. Now that replicas drop statistics entries for dropped objects, it is not necessary anymore to reset stats when starting from a cleanly shut down replica. Subsequent commits will perform some further code cleanup, adapt docs and add tests. Bumps PGSTAT_FILE_FORMAT_ID. Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com> Author: Andres Freund <andres@anarazel.de> Author: Melanie Plageman <melanieplageman@gmail.com> Reviewed-By: Andres Freund <andres@anarazel.de> Reviewed-By: Thomas Munro <thomas.munro@gmail.com> Reviewed-By: Justin Pryzby <pryzby@telsasoft.com> Reviewed-By: "David G. Johnston" <david.g.johnston@gmail.com> Reviewed-By: Tomas Vondra <tomas.vondra@2ndquadrant.com> (in a much earlier version) Reviewed-By: Arthur Zakirov <a.zakirov@postgrespro.ru> (in a much earlier version) Reviewed-By: Antonin Houska <ah@cybertec.at> (in a much earlier version) Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de Discussion: https://postgr.es/m/20220308205351.2xcn6k4x5yivcxyd@alap3.anarazel.de Discussion: https://postgr.es/m/20210319235115.y3wz7hpnnrshdyv6@alap3.anarazel.de
* Allow granting SET and ALTER SYSTEM privileges on GUC parameters.Tom Lane2022-04-06
This patch allows "PGC_SUSET" parameters to be set by non-superusers if they have been explicitly granted the privilege to do so. The privilege to perform ALTER SYSTEM SET/RESET on a specific parameter can also be granted. Such privileges are cluster-wide, not per database. They are tracked in a new shared catalog, pg_parameter_acl.

Granting and revoking these new privileges works as one would expect. One caveat is that PGC_USERSET GUCs are unaffected by the SET privilege --- one could wish that those were handled by a revocable grant to PUBLIC, but they are not, because we couldn't make it robust enough for GUCs defined by extensions.

Mark Dilger, reviewed at various times by Andrew Dunstan, Robert Haas, Joshua Brindle, and myself

Discussion: https://postgr.es/m/3D691E20-C1D5-4B80-8BA5-6BEB63AF3029@enterprisedb.com
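A hedged sketch of the new grants (the role name is hypothetical; the parameters are ordinary built-in GUCs):

    -- allow a non-superuser to change a superuser-only setting in its own sessions
    GRANT SET ON PARAMETER log_min_duration_statement TO monitoring_role;

    -- allow it to persist a setting cluster-wide via ALTER SYSTEM
    GRANT ALTER SYSTEM ON PARAMETER autovacuum_naptime TO monitoring_role;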
* Add new block-by-block strategy for CREATE DATABASE.Robert Haas2022-03-29
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Because this strategy logs changes on a block-by-block basis, it avoids the need to checkpoint before and after the operation. However, because it logs each changed block individually, it might generate a lot of extra write-ahead logging if the template database is large. Therefore, the older strategy remains available via a new STRATEGY parameter to CREATE DATABASE, and a corresponding --strategy option to createdb. Somewhat controversially, this patch assembles the list of relations to be copied to the new database by reading the pg_class relation of the template database. Cross-database access like this isn't normally possible, but it can be made to work here because there can't be any connections to the database being copied, nor can it contain any in-doubt transactions. Even so, we have to use lower-level interfaces than normal, since the table scan and relcache interfaces will not work for a database to which we're not connected. The advantage of this approach is that we do not need to rely on the filesystem to determine what ought to be copied, but instead on PostgreSQL's own knowledge of the database structure. This avoids, for example, copying stray files that happen to be located in the source database directory. Dilip Kumar, with a fairly large number of cosmetic changes by me. Reviewed and tested by Ashutosh Sharma, Andres Freund, John Naylor, Greg Nancarrow, Neha Sharma. Additional feedback from Bruce Momjian, Heikki Linnakangas, Julien Rouhaud, Adam Brusselback, Kyotaro Horiguchi, Tomas Vondra, Andrew Dunstan, Álvaro Herrera, and others. Discussion: http://postgr.es/m/CA+TgmoYtcdxBjLh31DLxUXHxFVMPGzrU5_T=CYCvRyFHywSBUQ@mail.gmail.com
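As a usage sketch (database names are hypothetical), the strategy can be chosen per command, with the new block-by-block WAL_LOG strategy being the default:

    -- copy the template block by block, avoiding the surrounding checkpoints
    CREATE DATABASE analytics TEMPLATE app_template STRATEGY = WAL_LOG;

    -- fall back to the older checkpoint-and-copy-files approach
    CREATE DATABASE analytics_copy TEMPLATE app_template STRATEGY = FILE_COPY;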
* Allow specifying column lists for logical replicationTomas Vondra2022-03-26
This allows specifying an optional column list when adding a table to logical replication. The column list may be specified after the table name, enclosed in parentheses. Columns not included in this list are not sent to the subscriber, allowing the schema on the subscriber to be a subset of the publisher schema.

For UPDATE/DELETE publications, the column list needs to cover all REPLICA IDENTITY columns. For INSERT publications, the column list is arbitrary and may omit some REPLICA IDENTITY columns. Furthermore, if the table uses REPLICA IDENTITY FULL, column list is not allowed.

The column list can contain only simple column references. Complex expressions, function calls etc. are not allowed. This restriction could be relaxed in the future.

During the initial table synchronization, only columns included in the column list are copied to the subscriber. If the subscription has several publications, containing the same table with different column lists, columns specified in any of the lists will be copied. This means all columns are replicated if the table has no column list at all (which is treated as column list with all columns), or when one of the publications is defined as FOR ALL TABLES (possibly IN SCHEMA that matches the schema of the table).

For partitioned tables, publish_via_partition_root determines whether the column list for the root or the leaf relation will be used. If the parameter is 'false' (the default), the list defined for the leaf relation is used. Otherwise, the column list for the root partition will be used.

Psql commands \dRp+ and \d <table-name> now display any column lists.

Author: Tomas Vondra, Alvaro Herrera, Rahila Syed
Reviewed-by: Peter Eisentraut, Alvaro Herrera, Vignesh C, Ibrar Ahmed, Amit Kapila, Hou zj, Peter Smith, Wang wei, Tang, Shi yu
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
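A minimal sketch of the syntax (publication, table and column names are hypothetical):

    -- replicate only id and title; other columns never leave the publisher
    CREATE PUBLICATION pub_docs FOR TABLE documents (id, title);

    -- a column list can likewise be set on an existing publication
    ALTER PUBLICATION pub_docs SET TABLE documents (id, title, owner_id);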
* Add decoding of sequences to built-in replicationTomas Vondra2022-03-24
| | | | | | | | | | | | | | | | | | | | | | | | | This commit adds support for decoding of sequences to the built-in replication (the infrastructure was added by commit 0da92dc530). The syntax and behavior mostly mimics handling of tables, i.e. a publication may be defined as FOR ALL SEQUENCES (replicating all sequences in a database), FOR ALL SEQUENCES IN SCHEMA (replicating all sequences in a particular schema) or individual sequences. To publish sequence modifications, the publication has to include 'sequence' action. The protocol is extended with a new message, describing sequence increments. A new system view pg_publication_sequences lists all the sequences added to a publication, both directly and indirectly. Various psql commands (\d and \dRp) are improved to also display publications including a given sequence, or sequences included in a publication. Author: Tomas Vondra, Cary Huang Reviewed-by: Peter Eisentraut, Amit Kapila, Hannu Krosing, Andres Freund, Petr Jelinek Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
* Don't call fwrite() with len == 0 when writing out relcache init file.Andres Freund2022-03-23
Noticed via -fsanitize=undefined. Backpatch to all branches, for the same reasons as 46ab07ffda9.

Discussion: https://postgr.es/m/20220323173537.ll7klrglnp4gn2um@alap3.anarazel.de
Backpatch: 10-
* Add support for security invoker views.Dean Rasheed2022-03-22
A security invoker view checks permissions for accessing its underlying base relations using the privileges of the user of the view, rather than the privileges of the view owner. Additionally, if any of the base relations are tables with RLS enabled, the policies of the user of the view are applied, rather than those of the view owner.

This allows views to be defined without giving away additional privileges on the underlying base relations, and matches a similar feature available in other database systems.

It also allows views to operate more naturally with RLS, without affecting the assignments of policies to users.

Christoph Heiss, with some additional hacking by me. Reviewed by Laurenz Albe and Wolfgang Walther.

Discussion: https://postgr.es/m/b66dd6d6-ad3e-c6f2-8b90-47be773da240%40cybertec.at
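The feature is exposed as a view option; a short sketch (object names are hypothetical):

    -- permissions and RLS policies are checked as the querying user, not the view owner
    CREATE VIEW visible_orders WITH (security_invoker = true) AS
        SELECT id, customer_id, total FROM orders;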
* Refactor code for reading and writing relation map files.Robert Haas2022-03-17
| | | | | | | | | | | | | | | | | | | | | | | | | | Restructure things so that the functions which update the global variables shared_map and local_map are separate from the functions which just read and write relation map files without touching any global variables. In the new structure of things, write_relmap_file() writes a relmap file but no longer performs global variable updates. A symmetric function read_relmap_file() that just reads a file without changing any global variables is added, and load_relmap_file(), which does change the global variables, uses it as a subroutine. Because write_relmap_file() no longer updates shared_map and local_map, that logic is moved to perform_relmap_update(). However, no similar logic is added to relmap_redo() even though it also calls write_relmap_file(). That's because recovery must not rely on the contents of the relation map, and therefore there is no need to initialize it. In fact, doing so seems like a mistake, because we might then manage to rely on the in-memory map where we shouldn't. Patch by me, based on earlier work by Dilip Kumar. Reviewed by Ashutosh Sharma. Discussion: http://postgr.es/m/CA+TgmobQLgrt4AXsc0ru7aFFkzv=9fS-Q_yO69=k9WY67RCctg@mail.gmail.com
* Restore the previous semantics of get_constraint_index().Tom Lane2022-03-11
| | | | | | | | | | | | | | | | | | | | | | | | Commit 8b069ef5d changed this function to look at pg_constraint.conindid rather than searching pg_depend. That was a good performance improvement, but it failed to preserve the exact semantics. The old code would only return an index that was "owned by" (internally dependent on) the specified constraint, whereas the new code will also return indexes that are just referenced by foreign key constraints. This confuses ALTER TABLE, which was implicitly expecting the previous semantics, into failing with errors like ERROR: relation 146621 has multiple clustered indexes or ERROR: "pk_attbl" is not an index for table "atref" We can fix this without reverting the performance improvement by adding a contype check in get_constraint_index(). Another way could be to make ALTER TABLE check it, but I'm worried that extension code could also have subtle dependencies on the old semantics. Tom Lane and Japin Li, per bug #17409 from Holly Roberts. Back-patch to v14 where the error crept in. Discussion: https://postgr.es/m/17409-52871dda8b5741cb@postgresql.org
* Parse/analyze function renamingPeter Eisentraut2022-03-04
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There are three parallel ways to call parse/analyze: with fixed parameters, with variable parameters, and by supplying your own parser callback. Some of the involved functions were confusingly named and made this API structure more confusing. This patch renames some functions to make this clearer: parse_analyze() -> parse_analyze_fixedparams() pg_analyze_and_rewrite() -> pg_analyze_and_rewrite_fixedparams() (Otherwise one might think this variant doesn't accept parameters, but in fact all three ways accept parameters.) pg_analyze_and_rewrite_params() -> pg_analyze_and_rewrite_withcb() (Before, and also when considering pg_analyze_and_rewrite(), one might think this is the only way to pass parameters. Moreover, the parser callback doesn't necessarily need to parse only parameters, it's just one of the things it could do.) parse_fixed_parameters() -> setup_parse_fixed_parameters() parse_variable_parameters() -> setup_parse_variable_parameters() (These functions don't actually do any parsing, they just set up callbacks to use during parsing later.) This patch also adds some const decorations to the fixed-parameters API, so the distinction from the variable-parameters API is more clear. Reviewed-by: Nathan Bossart <bossartn@amazon.com> Discussion: https://www.postgresql.org/message-id/flat/c67ce276-52b4-0239-dc0e-39875bf81840@enterprisedb.com
* Allow specifying row filters for logical replication of tables.Amit Kapila2022-02-22
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This feature adds row filtering for publication tables. When a publication is defined or modified, an optional WHERE clause can be specified. Rows that don't satisfy this WHERE clause will be filtered out. This allows a set of tables to be partially replicated. The row filter is per table. A new row filter can be added simply by specifying a WHERE clause after the table name. The WHERE clause must be enclosed by parentheses. The row filter WHERE clause for a table added to a publication that publishes UPDATE and/or DELETE operations must contain only columns that are covered by REPLICA IDENTITY. The row filter WHERE clause for a table added to a publication that publishes INSERT can use any column. If the row filter evaluates to NULL, it is regarded as "false". The WHERE clause only allows simple expressions that don't have user-defined functions, user-defined operators, user-defined types, user-defined collations, non-immutable built-in functions, or references to system columns. These restrictions could be addressed in the future. If you choose to do the initial table synchronization, only data that satisfies the row filters is copied to the subscriber. If the subscription has several publications in which a table has been published with different WHERE clauses, rows that satisfy ANY of the expressions will be copied. If a subscriber is a pre-15 version, the initial table synchronization won't use row filters even if they are defined in the publisher. The row filters are applied before publishing the changes. If the subscription has several publications in which the same table has been published with different filters (for the same publish operation), those expressions get OR'ed together so that rows satisfying any of the expressions will be replicated. This means all the other filters become redundant if (a) one of the publications have no filter at all, (b) one of the publications was created using FOR ALL TABLES, (c) one of the publications was created using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema. If your publication contains a partitioned table, the publication parameter publish_via_partition_root determines if it uses the partition's row filter (if the parameter is false, the default) or the root partitioned table's row filter. Psql commands \dRp+ and \d <table-name> will display any row filters. Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
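A brief sketch of the syntax (names are hypothetical); note that the WHERE clause must be enclosed in parentheses:

    -- only rows satisfying the filter are published (and initially copied)
    CREATE PUBLICATION pub_active_users FOR TABLE users WHERE (active IS TRUE);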
* Use bitwise rotate functions in more placesJohn Naylor2022-02-20
| | | | | | | | | | | | | There were a number of places in the code that used bespoke bit-twiddling expressions to do bitwise rotation. While we've had pg_rotate_right32() for a while now, we hadn't gotten around to standardizing on that. Do so now. Since many potential call sites look more natural with the "left" equivalent, add that function too. Reviewed by Tom Lane and Yugo Nagata Discussion: https://www.postgresql.org/message-id/CAFBsxsH7c1LC0CGZ0ADCBXLHU5-%3DKNXx-r7tHYPAW51b2HK4Qw%40mail.gmail.com
* Add UNIQUE null treatment optionPeter Eisentraut2022-02-03
The SQL standard has been ambiguous about whether null values in unique constraints should be considered equal or not. Different implementations have different behaviors. In the SQL:202x draft, this has been formalized by making this implementation-defined and adding an option on unique constraint definitions

    UNIQUE [ NULLS [NOT] DISTINCT ]

to choose a behavior explicitly.

This patch adds this option to PostgreSQL. The default behavior remains UNIQUE NULLS DISTINCT. Making this happen in the btree code is pretty easy; most of the patch is just to carry the flag around to all the places that need it.

The CREATE UNIQUE INDEX syntax extension is not from the standard, it's my own invention.

I named all the internal flags, catalog columns, etc. in the negative ("nulls not distinct") so that the default PostgreSQL behavior is the default if the flag is false.

Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
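A short sketch of both spellings (table and column names are hypothetical):

    -- constraint syntax: NULLs compare equal, so at most one NULL code is allowed
    CREATE TABLE coupons (code text UNIQUE NULLS NOT DISTINCT);

    -- the equivalent non-standard index syntax mentioned above
    CREATE TABLE vouchers (code text);
    CREATE UNIQUE INDEX vouchers_code_idx ON vouchers (code) NULLS NOT DISTINCT;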
* Remove xloginsert.h from xlog.hAlvaro Herrera2022-01-30
xlog.h is directly and indirectly #included in a lot of places. With this change, xloginsert.h is no longer unnecessarily included in the large number of them that don't need it.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACVe-W+WM5P44N7eG9C2_FmaeM8Dq5aCnD3fHt0Ba=WR6w@mail.gmail.com
* Add stxdinherit flag to pg_statistic_ext_dataTomas Vondra2022-01-16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Add pg_statistic_ext_data.stxdinherit flag, so that for each extended statistics definition we can store two versions of data - one for the relation alone, one for the whole inheritance tree. This is analogous to pg_statistic.stainherit, but we failed to include such flag in catalogs for extended statistics, and we had to work around it (see commits 859b3003de, 36c4bc6e72 and 20b9fa308e). This changes the relationship between the two catalogs storing extended statistics objects (pg_statistic_ext and pg_statistic_ext_data). Until now, there was a simple 1:1 mapping - for each definition there was one pg_statistic_ext_data row, and this row was inserted while creating the statistics (and then updated during ANALYZE). With the stxdinherit flag, we don't know how many rows there will be (child relations may be added after the statistics object is defined), so there may be up to two rows. We could make CREATE STATISTICS to always create both rows, but that seems wasteful - without partitioning we only need stxdinherit=false rows, and declaratively partitioned tables need only stxdinherit=true. So we no longer initialize pg_statistic_ext_data in CREATE STATISTICS, and instead make that a responsibility of ANALYZE. Which is what we do for regular statistics too. Patch by me, with extensive improvements and fixes by Justin Pryzby. Author: Tomas Vondra, Justin Pryzby Reviewed-by: Tomas Vondra, Justin Pryzby Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
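For reference, once ANALYZE has populated the data catalog, the flag can be inspected directly; a sketch assuming a hypothetical statistics object parent_stats defined on a partitioned table:

    SELECT d.stxdinherit
    FROM pg_statistic_ext e
    JOIN pg_statistic_ext_data d ON d.stxoid = e.oid
    WHERE e.stxname = 'parent_stats';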
* Update copyright for 2022Bruce Momjian2022-01-07
Backpatch-through: 10
* Always use ReleaseTupleDesc after lookup_rowtype_tupdesc et al.Tom Lane2021-12-15
| | | | | | | | | | | | | | | | | | | | The API spec for lookup_rowtype_tupdesc previously said you could use either ReleaseTupleDesc or DecrTupleDescRefCount. However, the latter choice means the caller must be certain that the returned tupdesc is refcounted. I don't recall right now whether that was always true when this spec was written, but it's certainly not always true since we introduced shared record typcaches for parallel workers. That means that callers using DecrTupleDescRefCount are dependent on typcache behavior details that they probably shouldn't be. Hence, change the API spec to say that you must call ReleaseTupleDesc, and fix the half-dozen callers that weren't. AFAICT this is just future-proofing, there's no live bug here. So no back-patch. Per gripe from Chapman Flack. Discussion: https://postgr.es/m/61B901A4.1050808@anastigmatix.net
* Allow specifying column list for foreign key ON DELETE SET actionsPeter Eisentraut2021-12-08
Extend the foreign key ON DELETE actions SET NULL and SET DEFAULT by allowing the specification of a column list, like

    CREATE TABLE posts (
        ...
        FOREIGN KEY (tenant_id, author_id) REFERENCES users ON DELETE SET NULL (author_id)
    );

If a column list is specified, only those columns are set to null/default, instead of all the columns in the foreign-key constraint.

This is useful for multitenant or sharded schemas, where the tenant or shard ID is included in the primary key of all tables but shouldn't be set to null.

Author: Paul Martinez <paulmtz@google.com>
Discussion: https://www.postgresql.org/message-id/flat/CACqFVBZQyMYJV=njbSMxf+rbDHpx=W=B7AEaMKn8dWn9OZJY7w@mail.gmail.com
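Building on the commit's own example, a hedged sketch of the observable behavior (the layout of the referenced users table is an assumption here):

    -- deleting a user nulls only author_id in referencing posts;
    -- tenant_id is left intact, preserving the shard key
    DELETE FROM users WHERE tenant_id = 1 AND user_id = 42;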
* Some RELKIND macro refactoringPeter Eisentraut2021-12-03
Add more macros to group some RELKIND_* macros:

- RELKIND_HAS_PARTITIONS()
- RELKIND_HAS_TABLESPACE()
- RELKIND_HAS_TABLE_AM()

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/a574c8f1-9c84-93ad-a9e5-65233d6fc00f%40enterprisedb.com
* Ignore BRIN indexes when checking for HOT updatesTomas Vondra2021-11-30
When determining whether an index update may be skipped by using HOT, we can ignore attributes indexed only by BRIN indexes. There are no index pointers to individual tuples in BRIN, and the page range summary will be updated anyway as it relies on visibility info.

This also removes rd_indexattr list, and replaces it with rd_attrsvalid flag. The list was not used anywhere, and a simple flag is sufficient.

Patch by Josef Simanek, various fixes and improvements by me.

Author: Josef Simanek
Reviewed-by: Tomas Vondra, Alvaro Herrera
Discussion: https://postgr.es/m/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com
* Remove PG_USED_FOR_ASSERTS_ONLY from variables in general useDaniel Gustafsson2021-11-30
fsstate in process_pending_requests (in postgres_fdw.c) was added in 8998e3cafa2 as an assertion-only variable; 1ec7fca8592 started using the variable outside of assertions.

rd_index in get_index_column_opclass (in lsyscache.c) was introduced in 2a6368343ff, and then promptly used in the fix commit 7e041603904 shortly thereafter.

This removes the PG_USED_FOR_ASSERTS_ONLY variable decoration from the above-mentioned variables.

Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Discussion: https://postgr.es/m/F959106C-0F21-43A5-B2AE-D007D51ACBEE@yesql.se
* Allow publishing the tables of schema.Amit Kapila2021-10-27
| | | | | | | | | | | | | | | | | | | | | | | | | | A new option "FOR ALL TABLES IN SCHEMA" in Create/Alter Publication allows one or more schemas to be specified, whose tables are selected by the publisher for sending the data to the subscriber. The new syntax allows specifying both the tables and schemas. For example: CREATE PUBLICATION pub1 FOR TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2; OR ALTER PUBLICATION pub1 ADD TABLE t1,t2,t3, ALL TABLES IN SCHEMA s1,s2; A new system table "pg_publication_namespace" has been added, to maintain the schemas that the user wants to publish through the publication. Modified the output plugin (pgoutput) to publish the changes if the relation is part of schema publication. Updates pg_dump to identify and dump schema publications. Updates the \d family of commands to display schema publications and \dRp+ variant will now display associated schemas if any. Author: Vignesh C, Hou Zhijie, Amit Kapila Syntax-Suggested-by: Tom Lane, Alvaro Herrera Reviewed-by: Greg Nancarrow, Masahiko Sawada, Hou Zhijie, Amit Kapila, Haiying Tang, Ajin Cherian, Rahila Syed, Bharath Rupireddy, Mark Dilger Tested-by: Haiying Tang Discussion: https://www.postgresql.org/message-id/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ@mail.gmail.com
* Fix CREATE INDEX CONCURRENTLY for the newest prepared transactions.Noah Misch2021-10-23
| | | | | | | | | | | | | | | | | | | The purpose of commit 8a54e12a38d1545d249f1402f66c8cde2837d97c was to fix this, and it sufficed when the PREPARE TRANSACTION completed before the CIC looked for lock conflicts. Otherwise, things still broke. As before, in a cluster having used CIC while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. It may be necessary to reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices. Fix this for future index builds by making CIC wait for arbitrarily-recent prepared transactions and for ordinary transactions that may yet PREPARE TRANSACTION. As part of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC before it calls ProcArrayClearTransaction(). Back-patch to 9.6 (all supported versions). Andrey Borodin, reviewed (in earlier versions) by Andres Freund. Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
* Avoid race in RelationBuildDesc() affecting CREATE INDEX CONCURRENTLY.Noah Misch2021-10-23
| | | | | | | | | | | | | | | | | CIC and REINDEX CONCURRENTLY assume backends see their catalog changes no later than each backend's next transaction start. That failed to hold when a backend absorbed a relevant invalidation in the middle of running RelationBuildDesc() on the CIC index. Queries that use the resulting index can silently fail to find rows. Fix this for future index builds by making RelationBuildDesc() loop until it finishes without accepting a relevant invalidation. It may be necessary to reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices. Back-patch to 9.6 (all supported versions). Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres Freund. Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
* Disable anonymous record hash support except in special casesPeter Eisentraut2021-09-08
| | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 01e658fa74 added hash support for row types. This also added support for hashing anonymous record types, using the same approach that the type cache uses for comparison support for record types: It just reports that it works, but it might fail at run time if a component type doesn't actually support the operation. We get away with that for comparison because most types support that. But some types don't support hashing, so the current state can result in failures at run time where the planner chooses hashing over sorting, whereas that previously worked if only sorting was an option. We do, however, want the record hashing support for path tracking in recursive unions, and the SEARCH and CYCLE clauses built on that. In that case, hashing is the only plan option. So enable that, this commit implements the following approach: The type cache does not report that hashing is available for the record type. This undoes that part of 01e658fa74. Instead, callers that require hashing no matter what can override that result themselves. This patch only touches the callers to make the aforementioned recursive query cases work, namely the parse analysis of unions, as well as the hash_array() function. Reported-by: Sait Talha Nisanci <sait.nisanci@microsoft.com> Bug: #17158 Discussion: https://www.postgresql.org/message-id/flat/17158-8a2ba823982537a4%40postgresql.org
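For context, the recursive-union case that still relies on record hashing looks roughly like this (a sketch; the table and columns are hypothetical):

    -- the CYCLE clause tracks visited rows in a path column of row values,
    -- which needs hash support for anonymous record types
    WITH RECURSIVE graph_walk(id, next_id) AS (
        SELECT id, next_id FROM edges WHERE id = 1
      UNION ALL
        SELECT e.id, e.next_id
        FROM edges e
        JOIN graph_walk w ON e.id = w.next_id
    ) CYCLE id SET is_cycle USING path
    SELECT * FROM graph_walk;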
* Reduce memory consumption for pending invalidation messages.Tom Lane2021-08-16
| | | | | | | | | | | | | | | | The existing data structures in inval.c are fairly inefficient for the common case of a command or subtransaction that registers a small number of cache invalidation events. While this doesn't matter if we commit right away, it can build up to a lot of bloat in a transaction that contains many DDL operations. By making a few more assumptions about the expected use-case, we can switch to a representation using densely-packed arrays. Although this eliminates some data-copying, it doesn't seem to make much difference time-wise. But the space consumption decreases substantially. Patch by me; thanks to Nathan Bossart for review. Discussion: https://postgr.es/m/2380555.1622395376@sss.pgh.pa.us
* Use the "pg_temp" schema alias in EXPLAIN and related output.Tom Lane2021-07-27
| | | | | | | | | | | | | | | | | | | | | | | | | | This patch causes EXPLAIN output to refer to objects that are in the current session's temp schema with the "pg_temp" schema alias rather than that schema's actual name. This is useful for our own testing purposes since it will stabilize EXPLAIN VERBOSE output for such cases, allowing us to use that in regression tests. It should be less confusing for end users too. Since ruleutils.c needs to change behavior for this, the change also leaks into a few other users of ruleutils.c, for example pg_get_viewdef(). AFAICS that won't cause any problems. We did find that aggressively trying to change this behavior across-the-board would cause issues, but as long as "pg_temp" only appears within generated SQL text, I think it'll be fine. Along the way, make get_namespace_name_or_temp conform to the same API as get_namespace_name, ie that it returns a palloc'd string or NULL. The current behavior hasn't caused any bugs since no callers attempt to pfree the result, but if it gets more widespread usage that could become a problem. Amul Sul, reviewed and extended by me Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
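A quick sketch of the visible effect (the table name is hypothetical):

    CREATE TEMP TABLE scratch (x int);
    EXPLAIN (VERBOSE) SELECT x FROM scratch;
    -- the plan now refers to pg_temp.scratch rather than the
    -- session-specific pg_temp_NNN schema name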
* Remove unused function parameter in get_qual_from_partboundJohn Naylor2021-07-14
| | | | | | | | | | | | | | Commit 0563a3a8b changed how partition constraints were generated such that this function no longer computes the mapping of parent attnos to child attnos. This is an external function that extensions could use, so this is potentially a breaking change. No external callers are known, however, and this will make it simpler to write such callers in the future. Author: Hou Zhijie Reviewed-by: David Rowley, Michael Paquier, Soumyadeep Chakraborty Discussion: https://www.postgresql.org/message-id/flat/OS0PR01MB5716A75A45BE46101A1B489894379@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Rename debug_invalidate_system_caches_always to debug_discard_caches.Tom Lane2021-07-13
The name introduced by commit 4656e3d66 was agreed to be unreasonably long. To match this change, rename initdb's recently-added --clobber-cache option to --discard-caches.

Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
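For reference, the renamed GUC is used like this (a sketch; as far as I recall, non-zero values are only accepted in builds configured with cache-discard support, e.g. --enable-cassert):

    -- aggressively discard catalog caches, for testing cache-reload paths
    SET debug_discard_caches = 1;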
* Fix assign_record_type_typmod().Jeff Davis2021-07-10
If an error occurred in the wrong place, it was possible to leave an uninitialized entry in the hash table, leading to a crash. Fixed.

Also, be more careful about the order of operations so that an allocation error doesn't leak memory in CacheMemoryContext or unnecessarily advance NextRecordTypmod.

Backpatch through version 11. Earlier versions (prior to 35ea75632a5) do not exhibit the problem, because an uninitialized hash entry contains a valid empty list.

Author: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/HE1PR8303MB009069D476225B9A9E194B8891779@HE1PR8303MB0090.EURPRD83.prod.outlook.com
Backpatch-through: 11
* Reduce overhead of cache-clobber testing in LookupOpclassInfo().Tom Lane2021-07-05
| | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 03ffc4d6d added logic to bypass all caching behavior in LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled. It doesn't look like I stopped to think much about what that would cost, but recent investigation shows that the cost is enormous: it roughly doubles the time needed for cache-clobber test runs. There does seem to be value in this behavior when trying to test the opclass-cache loading logic itself, but for other purposes the cost is excessive. Hence, let's back off to doing this only when debug_invalidate_system_caches_always is at least 3; or in older branches, when CLOBBER_CACHE_RECURSIVELY is defined. While here, clean up some other minor issues in LookupOpclassInfo. Re-order the code so we aren't left with broken cache entries (leading to later core dumps) in the unlikely case that we suffer OOM while trying to allocate space for a new entry. (That seems to be my oversight in 03ffc4d6d.) Also, in >= v13, stop allocating one array entry too many. That's evidently left over from sloppy reversion in 851b14b0c. Back-patch to all supported branches, mainly to reduce the runtime of cache-clobbering buildfarm animals. Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
* Improve various places that double the size of a bufferDavid Rowley2021-07-01
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Several places were performing a tight loop to determine the first power of 2 number that's > or >= the required memory. Instead of using a loop for that, we can use pg_nextpower2_32 or pg_nextpower2_64. When we need a power of 2 number equal to or greater than a given amount, we just pass the amount to the nextpower2 function. When we need a power of 2 greater than the amount, we just pass the amount + 1. Additionally, in tsearch there were a couple of locations that were performing a while loop when a simple "if" would have done. In both of these locations only 1 item is being added, so the loop could only have ever iterated once. Changing the loop into an if statement makes the code very slightly more optimal as the condition is checked once rather than twice. There are quite a few remaining locations that increase the size of the buffer in the following form: while (reqsize >= buflen) { buflen *= 2; buf = repalloc(buf, buflen); } These are not touched in this commit. repalloc will error out for sizes larger than MaxAllocSize. Changing these to use pg_nextpower2_32 would remove the chance of that error being raised. It's unclear from the code if the sizes could ever become that large, so err on the side of caution. Discussion: https://postgr.es/m/CAApHDvp=tns7RL4PH0ZR0M+M-YFLquK7218x=0B_zO+DbOma+w@mail.gmail.com Reviewed-by: Zhihong Yu
* Add index OID macro argument to DECLARE_INDEXPeter Eisentraut2021-06-29
Instead of defining symbols such as AmOidIndexId explicitly, include them as an argument of DECLARE_INDEX() and have genbki.pl generate them the same way as the table OID symbols from the CATALOG() declaration.

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/ccef1e46-a404-25b1-9b4c-85f2c08e1f28%40enterprisedb.com