path: root/src/backend/commands
* Print planning time only in EXPLAIN ANALYZE, not plain EXPLAIN. (Tom Lane, 2014-10-15)
    We've gotten enough push-back on that change to make it clear that it wasn't an especially good idea to do it like that. Revert plain EXPLAIN to its previous behavior, but keep the extra output in EXPLAIN ANALYZE. Per discussion.
    Internally, I set this up as a separate flag ExplainState.summary that controls printing of planning time and execution time. For now it's just copied from the ANALYZE option, but we could consider exposing it to users.
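    A minimal illustration of the resulting behavior, using a hypothetical table (exact output lines and labels vary by version and plan):

        -- Plain EXPLAIN no longer prints a "Planning time" line.
        EXPLAIN SELECT * FROM orders WHERE id = 1;

        -- EXPLAIN ANALYZE still prints both "Planning time" and "Execution time".
        EXPLAIN ANALYZE SELECT * FROM orders WHERE id = 1;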
* Increase number of hash join buckets for underestimate. (Kevin Grittner, 2014-10-13)
    If we expect batching at the very beginning, we size nbuckets for "full work_mem" (see how many tuples we can get into work_mem, while not breaking the NTUP_PER_BUCKET threshold).
    If we expect to be fine without batching, we start with the 'right' nbuckets and track the optimal nbuckets as we go (without actually resizing the hash table). Once we hit work_mem (considering the optimal nbuckets value), we keep the value.
    At the end of the first batch, we check whether (nbuckets != nbuckets_optimal) and resize the hash table if needed. Also, we keep this value for all batches (it's OK because it assumes full work_mem, and it makes the batchno evaluation trivial). So the resize happens only once.
    There could be cases where it would improve performance to allow the NTUP_PER_BUCKET threshold to be exceeded to keep everything in one batch rather than spilling to a second batch, but attempts to generate such a case have so far been unsuccessful; that issue may be addressed with a follow-on patch after further investigation.
    Tomas Vondra, with minor format and comment cleanup by me.
    Reviewed by Robert Haas, Heikki Linnakangas, and Kevin Grittner.
* Message improvements (Peter Eisentraut, 2014-10-12)
* Split builtins.h to a new header ruleutils.h (Alvaro Herrera, 2014-10-08)
    The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.
    Reviewed by Andres Freund and Michael Paquier.
* Implement SKIP LOCKED for row-level locks (Alvaro Herrera, 2014-10-07)
    This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.
    Catalog version bumped because this patch changes the representation of stored rules.
    Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.
    Author: Thomas Munro
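    A minimal sketch of the queue-consumer pattern mentioned above; the table and column names are hypothetical:

        -- Each worker claims one unprocessed job, skipping rows that other
        -- workers have already locked instead of blocking on them.
        BEGIN;
        SELECT id, payload
          FROM job_queue
         WHERE processed = false
         ORDER BY id
         LIMIT 1
           FOR UPDATE SKIP LOCKED;
        -- ... process the claimed row, mark it done ...
        COMMIT;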
* Fix typo in elog message. (Robert Haas, 2014-10-07)
* Fix CreatePolicy, pg_dump -v; psql and doc updates (Stephen Frost, 2014-10-03)
    Peter G pointed out that valgrind was, rightfully, complaining about CreatePolicy() ending up copying beyond the end of the parsed policy name. Name is a fixed-size type and we need to use namein (through DirectFunctionCall1()) to flush out the entire array before we pass it down to heap_form_tuple.
    Michael Paquier pointed out that pg_dump --verbose was missing a newline and Fabrízio de Royes Mello further pointed out that the schema was also missing from the messages, so fix those also.
    Also, based on an off-list comment from Kevin, rework the psql \d output to facilitate copy/pasting into a new CREATE or ALTER POLICY command.
    Lastly, improve the pg_policies view and update the documentation for it, along with a few other minor doc corrections based on an off-list discussion with Adam Brightwell.
* Copy-editing of row security (Stephen Frost, 2014-09-24)
    Address a few typos in the row security update, pointed out off-list by Adam Brightwell. Also include 'ALL' in the list of commands supported, for completeness.
* Code review for row security. (Stephen Frost, 2014-09-24)
    Buildfarm member tick identified an issue where the policies in the relcache for a relation were being replaced underneath a running query, leading to segfaults while processing the policies to be added to a query. Similar to how TupleDesc RuleLocks are handled, add in an equalRSDesc() function to check if the policies have actually changed and, if not, swap back the rsdesc field (using the original instead of the temporarily built one; the whole structure is swapped and then specific fields swapped back). This now passes a CLOBBER_CACHE_ALWAYS run for me and should resolve the buildfarm error.
    In addition to addressing this, add a new chapter in Data Definition under Privileges which explains row security and provides examples of its usage, change \d to always list policies (even if row security is disabled, but note that it is disabled, or enabled with no policies), rework check_role_for_policy (it really didn't need the entire policy, but it did need to be using has_privs_of_role()), and change the field in pg_class to relrowsecurity from relhasrowsecurity, based on Heikki's suggestion.
    Also from Heikki, only issue SET ROW_SECURITY in pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and document --enable-row-security options for pg_dump and pg_restore.
    Lastly, fix a number of minor whitespace and typo issues from Heikki and Dimitri, add a missing #include per Peter E, fix a few minor variable-assigned-but-not-used and resource-leak issues from Coverity, and add tab completion for the role attribute bypassrls as well.
* Fix compiler warning. (Robert Haas, 2014-09-22)
    It is meaningless to declare a pass-by-value return type const.
* Row-Level Security Policies (RLS) (Stephen Frost, 2014-09-19)
    Building on the updatable security-barrier views work, add the ability to define policies on tables to limit the set of rows which are returned from a query and which are allowed to be added to a table. Expressions defined by the policy for filtering are added to the security barrier quals of the query, while expressions defined to check records being added to a table are added to the with-check options of the query.
    New top-level commands are CREATE/ALTER/DROP POLICY and are controlled by the table owner. Row security can be enabled and disabled by the owner on a per-table basis using ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.
    Per discussion, ROW SECURITY is disabled on tables by default and must be enabled for policies on the table to be used. If no policies exist on a table with ROW SECURITY enabled, a default-deny policy is used and no records will be visible.
    By default, row security is applied at all times except for the table owner and the superuser. A new GUC, row_security, is added which can be set to ON, OFF, or FORCE. When set to FORCE, row security will be applied even for the table owner and superusers. When set to OFF, row security will be disabled when allowed and an error will be thrown if the user does not have rights to bypass row security.
    Per discussion, pg_dump sets row_security = OFF by default to ensure that exports and backups will have all data in the table or will error if there are insufficient privileges to bypass row security. A new option has been added to pg_dump, --enable-row-security, to ask pg_dump to export with row security enabled.
    A new role capability, BYPASSRLS, which can only be set by the superuser, is added to allow other users to bypass row security using row_security = OFF.
    Many thanks to the various individuals who have helped with the design, particularly Robert Haas for his feedback.
    Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, with additional changes and rework by me. Reviewers have included all of the above, Greg Smith, Jeff McCormick, and Robert Haas.
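    A minimal sketch of the new commands described above, with a hypothetical table and policy name:

        CREATE TABLE accounts (owner name, balance numeric);
        ALTER TABLE accounts ENABLE ROW SECURITY;

        -- Only rows owned by the current user are visible, and only such rows
        -- may be inserted or updated.
        CREATE POLICY accounts_owner ON accounts
            USING (owner = current_user)
            WITH CHECK (owner = current_user);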
* improve hash creation warning message (Bruce Momjian, 2014-09-11)
    This improves the wording of commit 84aa8ba128a08e6fdebb2497c7a79ebf18093e12.
    Report by Kevin Grittner
* Issue a warning during the creation of hash indexes (Bruce Momjian, 2014-09-10)
* Assorted message fixes and improvements (Peter Eisentraut, 2014-09-05)
* Issue clearer notice when inherited merged columns are moved (Bruce Momjian, 2014-09-03)
    CREATE TABLE ... INHERITS moves user-specified columns to the location of the inherited column.
    Report by Fatal Majid
* Assorted message improvements (Peter Eisentraut, 2014-08-29)
* Fix superuser concurrent refresh of matview owned by another. (Kevin Grittner, 2014-08-26)
    Use SECURITY_LOCAL_USERID_CHANGE while building temporary tables; only escalate to SECURITY_RESTRICTED_OPERATION while potentially running user-supplied code. The more secure mode was preventing temp table creation. Add regression tests to cover this problem.
    This fixes Bug #11208 reported by Bruno Emanuel de Andrade Silva.
    Backpatch to 9.4, where the bug was introduced.
* Implement IF NOT EXISTS for CREATE SEQUENCE. (Heikki Linnakangas, 2014-08-26)
    Fabrízio de Royes Mello
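    A one-line sketch of the new clause (the sequence name is illustrative); creating a sequence that already exists becomes a no-op with a notice instead of an error:

        CREATE SEQUENCE IF NOT EXISTS order_id_seq;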
* pg_upgrade: prevent automatic oid assignment (Bruce Momjian, 2014-08-25)
    Prevent automatic oid assignment when in binary upgrade mode. Also throw an error when contrib/pg_upgrade_support functions are called when not in binary upgrade mode.
    This prevents automatically-assigned oids from conflicting with later pre-assigned oids coming from the old cluster. It also makes sure oids are preserved in all important cases.
* DefineType: return base type OID, not its array (Alvaro Herrera, 2014-08-25)
    Event triggers want to know the OID of the interesting object created, which is the main type. The array created as part of the operation is just a subsidiary object which is not of much interest.
* Have CREATE TABLE AS and REFRESH return an OID (Alvaro Herrera, 2014-08-25)
    Other DDL commands are already returning the OID, which is required for future additional event trigger work. This merely brings these commands in line with the rest of utility command support.
* Oops, forgot to "git add" one last change (Alvaro Herrera, 2014-08-25)
* Editorial review of SET UNLOGGED (Alvaro Herrera, 2014-08-25)
    Add a succinct comment explaining why it's correct to change the persistence in this way. Also s/loggedness/persistence/ because native speakers didn't like the former term.
    Fabrízio and Álvaro
* Implement ALTER TABLE .. SET LOGGED / UNLOGGED (Alvaro Herrera, 2014-08-22)
    This enables changing permanent (logged) tables to unlogged and vice-versa.
    (Docs for ALTER TABLE / SET TABLESPACE got shuffled in an order that hopefully makes more sense than the original.)
    Author: Fabrízio de Royes Mello
    Reviewed by: Christoph Berg, Andres Freund, Thom Brown
    Some tweaking by Álvaro Herrera
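    A minimal sketch of the new forms (the table name is hypothetical); each statement switches the table's persistence, rewriting it in the process:

        ALTER TABLE audit_staging SET UNLOGGED;
        ALTER TABLE audit_staging SET LOGGED;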
* Rework 'MOVE ALL' to 'ALTER .. ALL IN TABLESPACE' (Stephen Frost, 2014-08-21)
    As 'ALTER TABLESPACE .. MOVE ALL' really didn't change the tablespace but instead changed objects inside tablespaces, it made sense to rework the syntax and supporting functions to operate under the 'ALTER (TABLE|INDEX|MATERIALIZED VIEW)' syntax and to be in tablecmds.c.
    Pointed out by Alvaro, who also suggested the new syntax.
    Back-patch to 9.4.
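    A sketch of the reworked syntax (tablespace and role names are illustrative):

        -- Move every table (optionally only those owned by given roles) from
        -- one tablespace to another.
        ALTER TABLE ALL IN TABLESPACE old_space SET TABLESPACE new_space;
        ALTER INDEX ALL IN TABLESPACE old_space OWNED BY report_role SET TABLESPACE new_space;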
* Reject duplicate column names in foreign key referenced-columns lists. (Tom Lane, 2014-08-09)
    Such cases are disallowed by the SQL spec, and even if we wanted to allow them, the semantics seem ambiguous: how should the FK columns be matched up with the columns of a unique index? (The matching could be significant in the presence of opclasses with different notions of equality, so this issue isn't just academic.)
    However, our code did not previously reject such cases, but instead would either fail to match to any unique index, or generate a bizarre opclass-lookup error because of sloppy thinking in the index-matching code.
    David Rowley
* Move log_newpage and log_newpage_buffer to xlog.c. (Heikki Linnakangas, 2014-07-31)
    log_newpage is used by many indexams, in addition to heap, but for historical reasons it's always been part of the heapam rmgr. Starting with 9.3, we have another WAL record type for logging an image of a page, XLOG_FPI. Simplify things by moving log_newpage and log_newpage_buffer to xlog.c, and switch to using the XLOG_FPI record type.
    Bump the WAL version number because the code to replay the old HEAP_NEWPAGE records is removed.
* Reject out-of-range numeric timezone specifications. (Tom Lane, 2014-07-21)
    In commit 631dc390f49909a5c8ebd6002cfb2bcee5415a9d, we started to handle simple numeric timezone offsets via the zic library instead of the old CTimeZone/HasCTZSet kluge. However, we overlooked the fact that the zic code will reject UTC offsets exceeding a week (which seems a bit arbitrary, but not because it's too tight ...). This led to possibly setting session_timezone to NULL, which results in crashes in most timezone-related operations as of 9.4, and crashes in a small number of places even before that. So check for NULL return from pg_tzset_offset() and report an appropriate error message.
    Per bug #11014 from Duncan Gillis. Back-patch to all supported branches, like the previous patch. (Unfortunately, as of today that no longer includes 8.4.)
* Adjust cutoff points in newly-added sanity tests. (Tom Lane, 2014-07-21)
    Per recommendation from Andres.
* Defend against bad relfrozenxid/relminmxid/datfrozenxid/datminmxid values. (Tom Lane, 2014-07-21)
    In commit a61daa14d56867e90dc011bbba52ef771cea6770, we fixed pg_upgrade so that it would install sane relminmxid and datminmxid values, but that does not cure the problem for installations that were already pg_upgraded to 9.3; they'll initially have "1" in those fields. This is not a big problem so long as 1 is "in the past" compared to the current nextMultiXact counter. But if an installation were more than halfway to the MXID wrap point at the time of upgrade, 1 would appear to be "in the future" and that would effectively disable tracking of oldest MXIDs in those tables/databases, until such time as the counter wrapped around.
    While in itself this isn't worse than the situation pre-9.3, where we did not manage MXID wraparound risk at all, the consequences of premature truncation of pg_multixact are worse now; so we ought to make some effort to cope with this.
    We discussed advising users to fix the tracking values manually, but that seems both very tedious and very error-prone. Instead, this patch adopts two amelioration rules. First, a relminmxid value that is "in the future" is allowed to be overwritten with a full-table VACUUM's actual freeze cutoff, ignoring the normal rule that relminmxid should never go backwards. (This essentially assumes that we have enough defenses in place that wraparound can never occur anymore, and thus that a value "in the future" must be corrupt.) Second, if we see any "in the future" values then we refrain from truncating pg_clog and pg_multixact. This prevents loss of clog data until we have cleaned up all the broken tracking data.
    In the worst case that could result in considerable clog bloat, but in practice we expect that relfrozenxid-driven freezing will happen soon enough to fix the problem before clog bloat becomes intolerable. (Users could do manual VACUUM FREEZEs if not.)
    Note that this mechanism cannot save us if there are already-wrapped or already-truncated-away MXIDs in the table; it's only capable of dealing with corrupt tracking values. But that's the situation we have with the pg_upgrade bug.
    For consistency, apply the same rules to relfrozenxid/datfrozenxid. There are no known mechanisms for these to get messed up, but if they were, the same tactics seem appropriate for fixing them.
* Move view reloptions into their own varlena struct (Alvaro Herrera, 2014-07-14)
    Per discussion after a gripe from me in http://www.postgresql.org/message-id/20140611194633.GH18688@eldon.alvh.no-ip.org
    Jaime Casanova
* Prevent bitmap heap scans from showing unnecessary block info in EXPLAIN ANALYZE. (Fujii Masao, 2014-07-14)
    EXPLAIN ANALYZE shows the numbers of exact/lossy blocks which a bitmap heap scan processes. But, previously, when those numbers were both zero, it displayed only the prefix "Heap Blocks:" in TEXT output format. This is strange and would confuse users. So this commit suppresses such unnecessary information.
    Backpatch to 9.4 where EXPLAIN ANALYZE was changed so that such information was displayed.
    Etsuro Fujita
* Implement IMPORT FOREIGN SCHEMA. (Tom Lane, 2014-07-10)
    This command provides an automated way to create foreign table definitions that match remote tables, thereby reducing tedium and chances for error. In this patch, we provide the necessary core-server infrastructure and implement the feature fully in the postgres_fdw foreign-data wrapper. Other wrappers will throw a "feature not supported" error until/unless they are updated.
    Ronan Dunklau and Michael Paquier, additional work by me
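    A minimal sketch of the new command against a postgres_fdw server (server, schema, and table names are hypothetical):

        -- Create local foreign-table definitions matching the listed remote tables.
        IMPORT FOREIGN SCHEMA public
            LIMIT TO (customers, orders)
            FROM SERVER remote_pg
            INTO local_shadow;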
* Allow CREATE/ALTER DATABASE to manipulate datistemplate and datallowconn. (Tom Lane, 2014-07-01)
    Historically these database properties could be manipulated only by manually updating pg_database, which is error-prone and only possible for superusers. But there seems no good reason not to allow database owners to set them for their databases, so invent CREATE/ALTER DATABASE options to do that. Adjust a couple of places that were doing it the hard way to use the commands instead.
    Vik Fearing, reviewed by Pavel Stehule
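    A sketch of the new options (the database name is illustrative); IS_TEMPLATE and ALLOW_CONNECTIONS map onto datistemplate and datallowconn:

        CREATE DATABASE skeleton_db WITH IS_TEMPLATE = true ALLOW_CONNECTIONS = false;
        ALTER DATABASE skeleton_db WITH ALLOW_CONNECTIONS true;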
* Refactor CREATE/ALTER DATABASE syntax so options need not be keywords. (Tom Lane, 2014-07-01)
    Most of the existing option names are keywords anyway, but we can get rid of LC_COLLATE and LC_CTYPE as keywords known to the lexer/grammar. This immediately reduces the size of the grammar tables by about 8KB, and will save more when we add additional CREATE/ALTER DATABASE options in future.
    A side effect of the implementation is that the CONNECTION LIMIT option can now also be spelled CONNECTION_LIMIT. We choose not to document this, however.
    Vik Fearing, based on a suggestion by me; reviewed by Pavel Stehule
* Have multixact be truncated by checkpoint, not vacuum (Alvaro Herrera, 2014-06-27)
    Instead of truncating pg_multixact at vacuum time, do it only at checkpoint time. The reason for doing it this way is twofold: first, we want it to delete only segments that we're certain will not be required if there's a crash immediately after the removal; and second, we want to do it relatively often so that older files are not left behind if there's an untimely crash.
    Per my proposal in http://www.postgresql.org/message-id/20140626044519.GJ7340@eldon.alvh.no-ip.org we now execute the truncation in the checkpointer process rather than as part of vacuum. Vacuum is only in charge of maintaining in shared memory the value to which it's possible to truncate the files; that value is stored as part of checkpoints also, and so upon recovery we can reuse the same value to re-execute truncate and reset the oldest-value-still-safe-to-use to one known to remain after truncation.
    Per bug reported by Jeff Janes in the course of his tests involving bug #8673.
    While at it, update some comments that hadn't been updated since multixacts were changed.
    Backpatch to 9.3, where persistency of pg_multixact files was introduced by commit 0ac5ad5134f2.
* Don't allow relminmxid to go backwards during VACUUM FULL (Alvaro Herrera, 2014-06-27)
    We were allowing a table's pg_class.relminmxid value to move backwards when heaps were swapped by VACUUM FULL or CLUSTER. There is a similar protection against relfrozenxid going backwards, which we neglected to clone when the multixact stuff was rejiggered by commit 0ac5ad5134f276.
    Backpatch to 9.3, where relminmxid was introduced.
    As reported by Heikki in http://www.postgresql.org/message-id/52401AEA.9000608@vmware.com
* Don't allow foreign tables with OIDs. (Heikki Linnakangas, 2014-06-24)
    The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it was still possible with default_with_oids=true. But the rest of the system, including pg_dump, isn't prepared to handle foreign tables with OIDs properly.
    Backpatch down to 9.1, where foreign tables were introduced. It's possible that there are databases out there that already have foreign tables with OIDs. There isn't much we can do about that, but at least we can prevent them from being created in the future.
    Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
* Do all-visible handling in lazy_vacuum_page() outside its critical section. (Andres Freund, 2014-06-20)
    Since fdf9e21196a lazy_vacuum_page() rechecks the all-visible status of pages in the second pass over the heap. It does so inside a critical section, but both visibilitymap_test() and heap_page_is_all_visible() perform operations that should not happen inside one. The former potentially performs IO and both potentially do memory allocations.
    To fix, simply move all the all-visible handling outside the critical section. Doing so means that the PD_ALL_VISIBLE on the page won't be included in the full page image of the HEAP2_CLEAN record anymore. But that's fine, the flag will be set by the HEAP2_VISIBLE record logged later.
    Backpatch to 9.3 where the problem was introduced. The bug only came to light due to the assertion added in 4a170ee9 and isn't likely to cause problems in production scenarios. The worst outcome is an avoidable PANIC restart.
    This also gets rid of the difference in the order of operations between master and standby mentioned in 2a8e1ac5.
    Per reports from David Leverton and Keith Fiske in bug #10533.
* Don't allow to disable backend assertions via the debug_assertions GUC. (Andres Freund, 2014-06-20)
    The existence of the assert_enabled variable (backing the debug_assertions GUC) reduced the amount of knowledge some static code checkers (like coverity and various compilers) could infer from the existence of the assertion. That could have been solved by optionally removing the assert_enabled variable from the Assert() et al macros at compile time when some special macro is defined, but the resulting complication doesn't seem to be worth the gain from having debug_assertions. Recompiling is fast enough.
    The debug_assertions GUC is still available, but read-only, as it's useful when diagnosing problems. The commandline/client startup option -A, which previously also allowed enabling/disabling assertions, has been removed as it doesn't serve a purpose anymore.
    While at it, reduce code duplication in bufmgr.c and localbuf.c assertions checking for spurious buffer pins. That code had to be reindented anyway to cope with the assert_enabled removal.
* Fix typos (Alvaro Herrera, 2014-06-12)
* Prevent auto_explain from changing the output of a user's EXPLAIN. (Tom Lane, 2014-05-20)
    Commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf, which introduced the EXPLAIN (TIMING) option, for some reason coded explain.c to look at planstate->instrument->need_timer rather than es->timing to decide whether to print timing info. However, the former flag might get set as a result of contrib/auto_explain wanting timing information. We certainly don't want activation of auto_explain to change user-visible statement behavior, so fix that.
    Also fix an independent bug introduced in the same patch: in the code path for a never-executed node with a machine-friendly output format, if timing was selected, it would fail to print the Actual Rows and Actual Loops items.
    Per bug #10404 from Tomonari Katsumata. Back-patch to 9.2 where the faulty code was introduced.
* pgindent run for 9.4 (Bruce Momjian, 2014-05-06)
    This includes removing tabs after periods in C comments, which was applied to back branches, so this change should not affect backpatching.
* Rationalize common/relpath.[hc]. (Tom Lane, 2014-04-30)
    Commit a73018392636ce832b09b5c31f6ad1f18a4643ea created rather a mess by putting dependencies on backend-only include files into include/common. We really shouldn't do that. To clean it up:
    * Move TABLESPACE_VERSION_DIRECTORY back to its longtime home in catalog/catalog.h. We won't consider this symbol part of the FE/BE API.
    * Push enum ForkNumber from relfilenode.h into relpath.h. We'll consider relpath.h as the source of truth for fork numbers, since relpath.c was already partially serving that function, and anyway relfilenode.h was kind of a random place for that enum.
    * So, relfilenode.h now includes relpath.h rather than vice-versa. This direction of dependency is fine. (That allows most, but not quite all, of the existing explicit #includes of relpath.h to go away again.)
    * Push forkname_to_number from catalog.c to relpath.c, just to centralize fork number stuff a bit better.
    * Push GetDatabasePath from catalog.c to relpath.c; it was rather odd that the previous commit didn't keep this together with relpath().
    * To avoid needing relfilenode.h in common/, redefine the underlying function (now called GetRelationPath) as taking separate OID arguments, and make the APIs using RelFileNode or RelFileNodeBackend into macro wrappers. (The macros have a potential multiple-eval risk, but none of the existing call sites have an issue with that; one of them had such a risk already anyway.)
    * Fix failure to follow the directions when "init" fork type was added; specifically, the errhint in forkname_to_number wasn't updated, and neither was the SGML documentation for pg_relation_size().
    * Fix tablespace-path-too-long check in CreateTableSpace() to account for fork-name component of maximum-length pathnames. This requires putting FORKNAMECHARS into a header file, but it was rather useless (and actually unreferenced) where it was.
    The last couple of items are potentially back-patchable bug fixes, if anyone is sufficiently excited about them; but personally I'm not.
    Per a gripe from Christoph Berg about how include/common wasn't self-contained.
* Allow polymorphic aggregates to have non-polymorphic state data types. (Tom Lane, 2014-04-23)
    Before 9.4, such an aggregate couldn't be declared, because its final function would have to have polymorphic result type but no polymorphic argument, which CREATE FUNCTION would quite properly reject. The ordered-set-aggregate patch found a workaround: allow the final function to be declared as accepting additional dummy arguments that have types matching the aggregate's regular input arguments. However, we failed to notice that this problem applies just as much to regular aggregates, despite the fact that we had a built-in regular aggregate array_agg() that was known to be undeclarable in SQL because its final function had an illegal signature. So what we should have done, and what this patch does, is to decouple the extra-dummy-arguments behavior from ordered-set aggregates and make it generally available for all aggregate declarations.
    We have to put this into 9.4 rather than waiting till later because it slightly alters the rules for declaring ordered-set aggregates.
    The patch turned out a bit bigger than I'd hoped because it proved necessary to record the extra-arguments option in a new pg_aggregate column. I'd thought we could just look at the final function's pronargs at runtime, but that didn't work well for variadic final functions. It's probably just as well though, because it simplifies life for pg_dump to record the option explicitly.
    While at it, fix array_agg() to have a valid final-function signature, and add an opr_sanity test to notice future deviations from polymorphic consistency. I also marked the percentile_cont() aggregates as not needing extra arguments, since they don't.
* Avoid transient bogus page contents when creating a sequence. (Heikki Linnakangas, 2014-04-22)
    Don't use simple_heap_insert to insert the tuple to a sequence relation. simple_heap_insert creates a heap insertion WAL record, and replaying that will create a regular heap page without the special area containing the sequence magic constant, which is wrong for a sequence.
    That was not a bug because we always created a sequence WAL record after that, and replaying that overwrote the bogus heap page, and the transient state could never be seen by another backend because it was only done when creating a new sequence relation. But it's simpler and cleaner to avoid that in the first place.
* Set the all-visible flag on heap page before writing WAL record, not after. (Heikki Linnakangas, 2014-04-17)
    If we set the all-visible flag after writing the WAL record, and XLogInsert takes a full-page image of the page, the image would not include the flag. We will then proceed to set the VM bit, which would then be set without the corresponding all-visible flag on the heap page.
    Found by comparing page images on master and standby, after writing/replaying each WAL record. (There is still a discrepancy: the all-visible flag won't be set after replaying the HEAP_CLEAN record, even though it is set in the master. However, it will be set when replaying the HEAP2_VISIBLE record and setting the VM bit, so the all-visible flag and VM bit are always consistent on the standby, even though they are momentarily out-of-sync with master.)
    Backpatch to 9.3 where this code was introduced.
* Rename EXPLAIN ANALYZE's "total runtime" output to "execution time". (Tom Lane, 2014-04-16)
    Now that EXPLAIN also outputs a "planning time" measurement, the use of "total" here seems rather confusing: it sounds like it might include the planning time, which of course it doesn't. Majority opinion was that "execution time" is a better label, so we'll call it that.
    This should be noted as a backwards incompatibility for tools that examine EXPLAIN ANALYZE output.
    In passing, I failed to resist the temptation to do a little editing on the materialized-view example affected by this change.
* Improve some O(N^2) behavior in window function evaluation. (Tom Lane, 2014-04-13)
    Repositioning the tuplestore seek pointer in window_gettupleslot() turns out to be a very significant expense when the window frame is sizable and the frame end can move. To fix, introduce a tuplestore function for skipping an arbitrary number of tuples in one call, parallel to the one we introduced for tuplesort objects in commit 8d65da1f. This reduces the cost of window_gettupleslot() to O(1) if the tuplestore has not spilled to disk.
    As in the previous commit, I didn't try to do any real optimization of tuplestore_skiptuples for the case where the tuplestore has spilled to disk. There is probably no practical way to get the cost to less than O(N) anyway, but perhaps someone can think of something later.
    Also fix PersistHoldablePortal() to make use of this API now that we have it.
    Based on a suggestion by Dean Rasheed, though this turns out not to look much like his patch.
* Make security barrier views automatically updatable (Stephen Frost, 2014-04-12)
    Views which are marked as security_barrier must have their quals applied before any user-defined quals are called, to prevent user-defined functions from being able to see rows which the security barrier view is intended to prevent them from seeing.
    Remove the restriction on security barrier views being automatically updatable by adding a new securityQuals list to the RTE structure which keeps track of the quals from security barrier views at each level, independently of the user-supplied quals. When RTEs are later discovered which have securityQuals populated, they are turned into subquery RTEs which are marked as security_barrier to prevent any user-supplied quals being pushed down (modulo LEAKPROOF quals).
    Dean Rasheed, reviewed by Craig Ringer, Simon Riggs, KaiGai Kohei
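    A minimal sketch of what this enables (table, view, and column names are hypothetical); the view's own qual is still applied ahead of any user-supplied quals:

        CREATE VIEW my_documents WITH (security_barrier) AS
            SELECT * FROM documents WHERE owner = current_user;

        -- The view is now automatically updatable; no rule or trigger needed.
        UPDATE my_documents SET title = 'draft' WHERE id = 42;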