path: root/src
...
* Make new triggers tests more robust (Peter Eisentraut, 2018-02-07)

  Add explicit collation on the trigger name to avoid locale dependencies.
  Also restrict the tables selected, to avoid interference from
  concurrently running tests.

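  As a rough illustration of the locale issue (not taken from the patch;
  the table name below is hypothetical), an explicit COLLATE clause makes
  the ordering platform-independent:

      SELECT trigger_name
      FROM information_schema.triggers
      WHERE event_object_table = 'my_table'   -- hypothetical table
      ORDER BY trigger_name COLLATE "C";
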
* Add more information_schema columns (Peter Eisentraut, 2018-02-07)

  - table_constraints.enforced
  - triggers.action_order
  - triggers.action_reference_old_table
  - triggers.action_reference_new_table

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

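  The new columns can be inspected directly; a simple illustration, assuming
  nothing beyond the columns named above plus the standard trigger_name and
  constraint_name columns:

      -- newly exposed trigger attributes
      SELECT trigger_name, action_order,
             action_reference_old_table, action_reference_new_table
      FROM information_schema.triggers;

      -- newly exposed constraint attribute
      SELECT constraint_name, enforced
      FROM information_schema.table_constraints;
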
* Update out-of-date comment in StartupXLOG. (Robert Haas, 2018-02-07)

  Commit 4b0d28de06b28e57c540fca458e4853854fbeaf8 should have updated this
  comment, but did not.

  Thomas Munro

  Discussion: http://postgr.es/m/CAEepm=0iJ8aqQcF9ij2KerAkuHF3SwrVTzjMdm1H4w++nfBf9A@mail.gmail.com

* Remove prototype for fmgr() function, which no longer exists. (Robert Haas, 2018-02-07)

  Commit 5ded4bd21403e143dd3eb66b92d52732fdac1945 removed the code for this
  function, but neglected to remove the prototype and associated comments.

  Dagfinn Ilmari Mannsåker

  Discussion: http://postgr.es/m/d8j4lmuxjzk.fsf@dalvik.ping.uio.no

* Support all SQL:2011 options for window frame clauses. (Tom Lane, 2018-02-07)

  This patch adds the ability to use "RANGE offset PRECEDING/FOLLOWING"
  frame boundaries in window functions.  We'd punted on that back in the
  original patch to add window functions, because it was not clear how to
  do it in a reasonably data-type-extensible fashion.  That problem is
  resolved here by adding the ability for btree operator classes to provide
  an "in_range" support function that defines how to add or subtract the
  RANGE offset value.  Factoring it this way also allows the operator class
  to avoid overflow problems near the ends of the datatype's range, if it
  wishes to expend effort on that.  (In the committed patch, the integer
  opclasses handle that issue, but it did not seem worth the trouble to
  avoid overflow failures for datetime types.)

  The patch includes in_range support for the integer_ops opfamily
  (int2/int4/int8) as well as the standard datetime types.  Support for
  other numeric types has been requested, but that seems like suitable
  material for a follow-on patch.

  In addition, the patch adds GROUPS mode which counts the offset in
  ORDER-BY peer groups rather than rows, and it adds the frame_exclusion
  options specified by SQL:2011.  As far as I can see, we are now fully up
  to spec on window framing options.

  Existing behaviors remain unchanged, except that I changed the errcode
  for a couple of existing error reports to meet the SQL spec's expectation
  that negative "offset" values should be reported as SQLSTATE 22013.

  Internally and in relevant parts of the documentation, we now
  consistently use the terminology "offset PRECEDING/FOLLOWING" rather than
  "value PRECEDING/FOLLOWING", since the term "value" is confusingly vague.

  Oliver Ford, reviewed and whacked around some by me

  Discussion: https://postgr.es/m/CAGMVOdu9sivPAxbNN0X+q19Sfv9edEPv=HibOJhB14TJv_RCQg@mail.gmail.com

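  A rough sketch of the new frame options (the empsalary table and its
  columns are made up for the example, not taken from the patch):

      -- RANGE with an offset, GROUPS mode, and a SQL:2011 frame exclusion
      SELECT salary,
             avg(salary) OVER (
                 ORDER BY salary
                 RANGE BETWEEN 100 PRECEDING AND 100 FOLLOWING) AS nearby_avg,
             sum(salary) OVER (
                 ORDER BY salary
                 GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING
                 EXCLUDE CURRENT ROW) AS peer_sum
      FROM empsalary;
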
* Fix incorrect grammar. (Robert Haas, 2018-02-06)

  Etsuro Fujita

  Discussion: http://postgr.es/m/5A7981EA.8020201@lab.ntt.co.jp

* Avoid valgrind complaint about write() of uninitialized bytes. (Robert Haas, 2018-02-06)

  LogicalTapeFreeze() may write out its first block when it is dirty but
  not full, and then immediately read the first block back in from its
  BufFile as a BLCKSZ-width block.  This can only occur in rare cases where
  very few tuples were written out, which is currently only possible with
  parallel external tuplesorts.  To avoid valgrind complaints, tell it to
  treat the tail of logtape.c's buffer as defined.

  Commit 9da0cc35284bdbe8d442d732963303ff0e0a40bc exposed this problem but
  did not create it.  LogicalTapeFreeze() has always tended to write out
  some amount of garbage bytes, but previously never wrote less than one
  block of data in total, so the problem was masked.

  Per buildfarm members lousyjack and skink.

  Peter Geoghegan, based on a suggestion from Tom Lane and me.  Some
  comment revisions by me.

* Doc: move info for btree opclass implementors into main documentation. (Tom Lane, 2018-02-06)

  Up to now, useful info for writing a new btree opclass has been buried in
  the backend's nbtree/README file.  Let's move it into the SGML docs, in
  preparation for extending it with info about "in_range" functions in the
  upcoming window RANGE patch.

  To do this, I chose to create a new chapter for btree indexes in Part VII
  (Internals), parallel to the chapters that exist for the newer index AMs.
  This is a pretty short chapter as-is.  At some point somebody might care
  to flesh it out with more detail about btree internals, but that is
  beyond the scope of my ambition for today.

  Discussion: https://postgr.es/m/23141.1517874668@sss.pgh.pa.us

* Fix possible crash in partition-wise join. (Robert Haas, 2018-02-05)

  The previous code assumed that we'd always succeed in creating
  child-joins for a joinrel for which partition-wise join was considered,
  but that's not guaranteed, at least in the case where dummy rels are
  involved.

  Ashutosh Bapat, with some wordsmithing by me.

  Discussion: http://postgr.es/m/CAFjFpRf8=uyMYYfeTBjWDMs1tR5t--FgOe2vKZPULxxdYQ4RNw@mail.gmail.com

* Ensure that all temp files made during pg_upgrade are non-world-readable. (Tom Lane, 2018-02-05)

  pg_upgrade has always attempted to ensure that the transient dump files
  it creates are inaccessible except to the owner.  However, refactoring in
  commit 76a7650c4 broke that for the file containing "pg_dumpall -g"
  output; since then, that file was protected according to the process's
  default umask.  Since that file may contain role passwords (hopefully
  encrypted, but passwords nonetheless), this is a particularly unfortunate
  oversight.  Prudent users of pg_upgrade on multiuser systems would
  probably run it under a umask tight enough that the issue is moot, but
  perhaps some users are depending only on pg_upgrade's umask changes to
  protect their data.

  To fix this in a future-proof way, let's just tighten the umask at
  process start.  There are no files pg_upgrade needs to write at a weaker
  security level; and if there were, transiently relaxing the umask around
  where they're created would be a safer approach.

  Report and patch by Tom Lane; the idea for the fix is due to Noah Misch.
  Back-patch to all supported branches.

  Security: CVE-2018-1053

* Fix RelationBuildPartitionKey's processing of partition key expressions. (Tom Lane, 2018-02-05)

  Failure to advance the list pointer while reading partition expressions
  from a list results in invoking an input function with inappropriate
  data, possibly leading to crashes or, with carefully crafted input,
  disclosure of arbitrary backend memory.

  Bug discovered independently by Álvaro Herrera and David Rowley.
  This patch is by Álvaro but owes something to David's proposed fix.
  Back-patch to v10 where the issue was introduced.

  Security: CVE-2018-1052

* Skip setting up shared instrumentation for Hash node if not needed. (Tom Lane, 2018-02-04)

  We don't need to set up the shared space for hash join instrumentation
  data if instrumentation hasn't been requested.  Let's follow the example
  of the similar Sort node code and save a few cycles by skipping that when
  we can.

  This reverts commit d59ff4ab3 and instead allows us to use the safer
  choice of passing noError = false to shm_toc_lookup in
  ExecHashInitializeWorker, since if we reach that call there should be a
  TOC entry to be found.

  Thomas Munro

  Discussion: https://postgr.es/m/E1ehkoZ-0005uW-43%40gemulon.postgresql.org

* Fix another instance of unsafe coding for shm_toc_lookup failure. (Tom Lane, 2018-02-02)

  One or another author of commit 5bcf389ec seems to have thought that
  computing an offset from a NULL pointer would yield another NULL pointer.
  There may possibly be architectures where that works, but common machines
  don't work like that.

  Per a quick code review of places calling shm_toc_lookup and not using
  noError = false.

* Be more wary about shm_toc_lookup failure. (Tom Lane, 2018-02-02)

  Commit 445dbd82a basically missed the point of commit d46633506, which
  was that we shouldn't allow shm_toc_lookup() failure to lead to a core
  dump or assertion crash, because the odds of such a failure should never
  be considered negligible.  It's correct that we can't expect the
  PARALLEL_KEY_ERROR_QUEUE TOC entry to be there if we have no workers.
  But if we have no workers, we're not going to do anything in this
  function with the lookup result anyway, so let's just skip it.  That lets
  the code use the easy-to-prove-safe noError=false case, rather than
  anything requiring effort to review.

  Back-patch to v10, like the previous commit.

  Discussion: https://postgr.es/m/3647.1517601675@sss.pgh.pa.us

* Fix application of identity values in some cases (Peter Eisentraut, 2018-02-02)

  Investigation of 2d2d06b7e27e3177d5bef0061801c75946871db3 revealed that
  identity values were not applied in some further cases, including logical
  replication subscribers, VALUES RTEs, and ALTER TABLE ... ADD COLUMN.
  To fix all that, apply the identity column expression in
  build_column_default() instead of repeating the same logic at each call
  site.

  For ALTER TABLE ... ADD COLUMN ... IDENTITY, the previous coding
  completely ignored that existing rows for the new column should have
  values filled in from the identity sequence.  The coding using
  build_column_default() fails for this because the sequence ownership
  isn't registered until after ALTER TABLE, and we can't do it before
  because we don't have the column in the catalog yet.  So we specially
  remember in ColumnDef the sequence name that we decided on and build a
  custom NextValueExpr using that.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

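  A minimal, made-up illustration of the ALTER TABLE case described above
  (table and column names are hypothetical):

      CREATE TABLE items (name text);
      INSERT INTO items VALUES ('widget'), ('gadget');

      -- existing rows now receive values from the new identity sequence
      ALTER TABLE items ADD COLUMN id int GENERATED ALWAYS AS IDENTITY;

      SELECT id, name FROM items;
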
* Support parallel btree index builds. (Robert Haas, 2018-02-02)

  To make this work, tuplesort.c and logtape.c must also support
  parallelism, so this patch adds that infrastructure and then applies it
  to the particular case of parallel btree index builds.  Testing to date
  shows that this can often be 2-3x faster than a serial index build.

  The model for deciding how many workers to use is fairly primitive at
  present, but it's better than not having the feature.  We can refine it
  as we get more experience.

  Peter Geoghegan with some help from Rushabh Lathia.  While Heikki
  Linnakangas is not an author of this patch, he wrote other patches
  without which this feature would not have been possible, and therefore
  the release notes should possibly credit him as an author of this
  feature.  Reviewed by Claudio Freire, Heikki Linnakangas, Thomas Munro,
  Tels, Amit Kapila, me.

  Discussion: http://postgr.es/m/CAM3SWZQKM=Pzc=CAHzRixKjp2eO5Q0Jg1SoFQqeXFQ647JiwqQ@mail.gmail.com
  Discussion: http://postgr.es/m/CAH2-Wz=AxWqDoVvGU7dq856S4r6sJAj6DBn7VMtigkB33N5eyg@mail.gmail.com

* Refactor code for partition bound searching (Robert Haas, 2018-02-02)

  Remove partition_bound_cmp() and partition_bound_bsearch(), whose void *
  argument could be, depending on the situation, of any of three different
  types: PartitionBoundSpec *, PartitionRangeBound *, Datum *.

  Instead, introduce separate bound-searching functions for each situation:
  partition_list_bsearch, partition_range_bsearch,
  partition_range_datum_bsearch, and partition_hash_bsearch.  This requires
  duplicating the code for binary search, but it makes the code much more
  type safe, involves fewer branches at runtime, and at least in my
  opinion, is much easier to understand.

  Along the way, add an option to partition_range_datum_bsearch allowing
  the number of keys to be specified, so that we can search for partitions
  based on a prefix of the full list of partition keys.  This is important
  for pending work to improve partition pruning.

  Amit Langote, per a suggestion from me.

  Discussion: http://postgr.es/m/CA+TgmoaVLDLc8=YESRwD32gPhodU_ELmXyKs77gveiYp+JE4vQ@mail.gmail.com

* Add new function WaitForParallelWorkersToAttach. (Robert Haas, 2018-02-02)

  Once this function has been called, we know that all workers have started
  and attached to their error queues -- so if any of them subsequently exit
  uncleanly, we'll be sure to throw an ERROR promptly.  Otherwise, users of
  the ParallelContext machinery must be careful not to wait forever for a
  worker that has failed to start.  Parallel query manages to work without
  needing this for reasons explained in new comments added by this patch,
  but it's a useful primitive for other parallel operations, such as the
  pending patch to make creating a btree index run in parallel.

  Amit Kapila, revised by me.  Additional review by Peter Geoghegan.

  Discussion: http://postgr.es/m/CAA4eK1+e2MzyouF5bg=OtyhDSX+=Ao=3htN=T-r_6s3gCtKFiw@mail.gmail.com

* Fix possible failure to mark hash metapage dirty. (Robert Haas, 2018-02-01)

  Report and suggested fix by Lixian Zou.  Amit Kapila put it in the form
  of a patch and reviewed it.

  Discussion: http://postgr.es/m/151739848647.1239.12528851873396651946@wrigleys.postgresql.org

* psql: Add quit/help behavior/hint, for other tool portability (Bruce Momjian, 2018-02-01)

  Issuing 'quit'/'exit' in an empty psql buffer exits psql.  Issuing
  'quit'/'exit' in a non-empty psql buffer, alone on a line with no leading
  whitespace, issues a hint on how to exit.  Also add similar 'help' hints
  for 'help' in a non-empty psql buffer.

  Reported-by: Everaldo Canuto

  Discussion: https://postgr.es/m/flat/CALVFHFb-C_5_94hueWg6Dd0zu7TfbpT7hzsh9Zf0DEDOSaAnfA%40mail.gmail.com

  Author: original author Robert Haas, modified by me

* Fix typo: colums -> columns. (Robert Haas, 2018-01-31)

  Along the way, also fix code indentation.

  Alexander Lakhin, reviewed by Michael Paquier

  Discussion: http://postgr.es/m/45c44aa7-7cfa-7f3b-83fd-d8300677fdda@gmail.com

* Fix list partition constraints for partition keys of array type. (Robert Haas, 2018-01-31)

  The old code always generated a constraint of the form
  col = ANY(ARRAY[val1, val2, ...]), but that's invalid when col is an
  array type.  Instead, generate col = val when there's only one value,
  col = val1 OR col = val2 OR ... when there are multiple values and col is
  of array type, and the old form when there are multiple values and col is
  not of an array type.

  As a side benefit, this makes constraint exclusion able to prune a list
  partition declared to accept a single Boolean value, which didn't work
  before.

  Amit Langote, reviewed by Etsuro Fujita

  Discussion: http://postgr.es/m/97267195-e235-89d1-a41a-c110198dfce9@lab.ntt.co.jp

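  A hypothetical example of the array-type case (all names invented for
  illustration):

      CREATE TABLE arr_parted (a int[]) PARTITION BY LIST (a);
      CREATE TABLE arr_part12 PARTITION OF arr_parted
          FOR VALUES IN ('{1}', '{2}');

      -- The implied partition constraint now has the form
      --     a = '{1}' OR a = '{2}'
      -- rather than a = ANY(ARRAY['{1}', '{2}']), which would be invalid
      -- for an array column.
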
* Refactor client-side SSL certificate checking code (Peter Eisentraut, 2018-01-30)

  Separate the parts specific to the SSL library from the general logic.

  The previous code structure was

      open_client_SSL() calls verify_peer_name_matches_certificate()
      calls verify_peer_name_matches_certificate_name()
      calls wildcard_certificate_match()

  and was completely in fe-secure-openssl.c.  The new structure is

      open_client_SSL() [openssl]
      calls pq_verify_peer_name_matches_certificate() [generic]
      calls pgtls_verify_peer_name_matches_certificate_guts() [openssl]
      calls openssl_verify_peer_name_matches_certificate_name() [openssl]
      calls pq_verify_peer_name_matches_certificate_name() [generic]
      calls wildcard_certificate_match() [generic]

  Move the generic functions into a new file fe-secure-common.c, so the
  calls generally go fe-connect.c -> fe-secure.c -> fe-secure-${impl}.c ->
  fe-secure-common.c, although there is a bit of back-and-forth between the
  last two.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Fix up references to scram-sha-256 (Peter Eisentraut, 2018-01-30)

  pg_hba_file_rules erroneously reported this as scram-sha256.  Fix that.
  To avoid future errors and confusion, also adjust documentation links and
  internal symbols to have a separator between "sha" and "256".

  Reported-by: Christophe Courtois <christophe.courtois@dalibo.com>
  Author: Michael Paquier <michael.paquier@gmail.com>

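  For reference, the corrected spelling is visible in the view itself
  (line_number, type, and auth_method are existing pg_hba_file_rules
  columns):

      -- auth_method now reads "scram-sha-256" rather than "scram-sha256"
      SELECT line_number, type, auth_method
      FROM pg_hba_file_rules;
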
* Add some noreturn attributes to help static analyzers (Peter Eisentraut, 2018-01-29)

* Silence complaint about dead assignment (Peter Eisentraut, 2018-01-29)

  The preferred place for "placate compiler" assignments is after
  elog(ERROR), not before it.  Otherwise, scan-build complains about a dead
  assignment.

* Introduce ExecQualAndReset() helper. (Andres Freund, 2018-01-29)

  It's a common task to evaluate a qual and reset the corresponding
  expression context.  Currently that requires storing the result of the
  qual eval, resetting the context, and then reacting on the result.  As
  that's awkward, several places only reset the context next time through a
  node.  That's not great, so introduce a helper that evaluates and resets.

  It's a bit ugly that it currently uses MemoryContextReset() instead of
  ResetExprContext(), but that seems easier than reordering all of
  executor.h.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20180109222544.f7loxrunqh3xjl5f@alap3.anarazel.de

* Save a few bytes by removing useless last argument to SearchCatCacheList. (Tom Lane, 2018-01-29)

  There's never any value in giving a fully specified cache key to
  SearchCatCacheList: you might as well call SearchCatCache instead, since
  there could be only one match.  So the maximum useful number of key
  arguments is one less than the supported number of key columns.  We might
  as well remove the useless extra argument and save some few bytes per
  call site, as well as a cycle or so per call.

  I believe the reason it was coded like this is that originally, callers
  had to write out all the dummy arguments in each call, and so it seemed
  less confusing if SearchCatCache and SearchCatCacheList took the same
  number of key arguments.  But since commit e26c539e9, callers only write
  their live arguments explicitly, making that a non-factor; and there's
  surely been enough time for third-party modules to adapt to that coding
  style.  So this is only an ABI break not an API break for callers.

  Per discussion with Oliver Ford, this might also make it less confusing
  how to use SearchCatCacheList correctly.

  Discussion: https://postgr.es/m/27788.1517069693@sss.pgh.pa.us

* Initialize unused ExprEvalStep fields. (Andres Freund, 2018-01-29)

  ExecPushExprSlots didn't initialize ExprEvalStep's resvalue/resnull steps
  as it didn't use them.  That caused wrong valgrind warnings for an
  upcoming patch, so zero-initialize.

  Also zero-initialize all scratch ExprEvalStep's allocated on the stack,
  to avoid issues with similar future omissions of non-critical data.

* Prevent growth of simplehash tables when they're "too empty". (Andres Freund, 2018-01-29)

  In cases where simplehash tables were filled with either a lot of
  conflicting hash values, or values that hash to consecutive values
  (i.e. build "chains"), the growth heuristics in
  d4c62a6b623d6eef88218158e9fa3cf974c6c7e5 could trigger rather
  explosively.

  To fix that, address some of the reasons (see previous commit) why the
  growth heuristics were needed, and only allow growth when the table isn't
  too empty.  While that means there are a few cases of bad input that can
  be slower, that seems a lot better than running out of memory very
  quickly.

  Author: Tomas Vondra and Andres Freund, with additional input by
  Thomas Munro, Tom Lane, Todd A. Cook
  Reported-By: Todd A. Cook, Tomas Vondra, Thomas Munro
  Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
  Backpatch: 10, where simplehash was introduced

* Improve bit perturbation in TupleHashTableHash. (Andres Freund, 2018-01-29)

  The changes in b81b5a96f424531b97cdd1dba97d9d1b9c9d372e did not fully
  address the issue, because the bit-mixing of the IV into the final hash
  key didn't prevent clustering in the input data from surviving in the
  output data.

  This didn't cause a lot of problems because of the additional growth
  conditions added in d4c62a6b623d6eef88218158e9fa3cf974c6c7e5.  But as we
  want to rein those in due to explosive growth in some edge cases, this
  needs to be fixed.

  Author: Andres Freund
  Discussion: https://postgr.es/m/20171127185700.1470.20362@wrigleys.postgresql.org
  Backpatch: 10, where simplehash was introduced

* Avoid misleading psql password prompt when username is multiply specified. (Tom Lane, 2018-01-29)

  When a password is needed, cases such as

      psql -d "postgresql://alice@localhost/testdb" -U bob

  would incorrectly prompt for "Password for user bob: ", when actually the
  connection will be attempted with username alice.  The priority order of
  which name to use isn't that important here, but the misleading prompt
  is.

  When we are prompting for a password after initial connection failure, we
  can fix this reliably by looking at PQuser(conn) to see how libpq
  interpreted the connection arguments.  But when we're doing a forced
  password prompt because of a -W switch, we can't use that solution.
  Fortunately, because the main use of -W is for noninteractive situations,
  it's less critical to produce a helpful prompt in such cases.  I made the
  startup prompt for -W just say "Password: " all the time, rather than
  expending extra code on trying to identify which username to use.  In the
  case of a \c command (after -W has been given), there's already logic in
  do_connect that determines whether the "dbname" is a connstring or URI,
  so we can avoid lobotomizing the prompt except in cases that are actually
  dubious.  (We could do similarly in startup.c if anyone complains, but
  for now it seems not worthwhile, especially since that would still be
  only a partial solution.)

  Per bug #15025 from Akos Vandra.  Although this is arguably a bug fix, it
  doesn't seem worth back-patching.  The case where it matters seems like a
  very corner-case usage, and someone might complain that we'd changed the
  behavior of -W in a minor release.

  Discussion: https://postgr.es/m/20180123130013.7407.24749@wrigleys.postgresql.org

* Add stack-overflow guards in set-operation planning. (Tom Lane, 2018-01-28)

  create_plan_recurse lacked any stack depth check.  This is not per our
  normal coding rules, but I'd supposed it was safe because earlier planner
  processing is more complex and presumably should eat more stack.  But bug
  #15033 from Andrew Grossman shows this isn't true, at least not for
  queries having the form of a many-thousand-way INTERSECT stack.

  Further testing showed that recurse_set_operations is also capable of
  being crashed in this way, since it likewise will recurse to the bottom
  of a parsetree before calling any support functions that might themselves
  contain any stack checks.  However, its stack consumption is only perhaps
  a third of create_plan_recurse's.

  It's possible that this particular problem with create_plan_recurse can
  only manifest in 9.6 and later, since before that we didn't build a Path
  tree for set operations.  But having seen this example, I now have no
  faith in the proposition that create_plan_recurse doesn't need a stack
  check, so back-patch to all supported branches.

  Discussion: https://postgr.es/m/20180127050845.28812.58244@wrigleys.postgresql.org

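  The problematic query shape, sketched here with only a few branches (the
  actual reports involved many thousands of branches, which now draw a
  clean stack-depth error instead of crashing the planner):

      SELECT 1
      INTERSECT SELECT 2
      INTERSECT SELECT 3;
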
* C includes: Reorder C includes in partition.c (Bruce Momjian, 2018-01-27)

  Discussion: https://postgr.es/m/5A69AA50.2060600@lab.ntt.co.jp
  Author: Etsuro Fujita

* Update time zone data files to tzdata release 2018c. (Tom Lane, 2018-01-27)

  DST law changes in Brazil, Sao Tome and Principe.  Historical corrections
  for Bolivia, Japan, and South Sudan.  The "US/Pacific-New" zone has been
  removed (it was only a link to America/Los_Angeles anyway).

* Avoid crash during EvalPlanQual recheck of an inner indexscan. (Tom Lane, 2018-01-27)

  Commit 09529a70b changed nodeIndexscan.c and nodeIndexonlyscan.c to
  postpone initialization of the indexscan proper until the first tuple
  fetch.  It overlooked the question of mark/restore behavior, which means
  that if some caller attempts to mark the scan before the first tuple
  fetch, you get a null pointer dereference.

  The only existing user of mark/restore is nodeMergejoin.c, which
  (somewhat accidentally) will never attempt to set a mark before the first
  inner tuple unless the inner child node is a Material node.  Hence the
  case can't arise normally, so it seems sufficient to document the
  assumption at both ends.

  However, during an EvalPlanQual recheck, ExecScanFetch doesn't call
  IndexNext but just returns the jammed-in test tuple.  Therefore, if we're
  doing a recheck in a plan tree with a mergejoin with inner indexscan,
  it's possible to reach ExecIndexMarkPos with iss_ScanDesc still null, as
  reported by Guo Xiang Tan in bug #15032.

  Really, when there's a test tuple supplied during an EPQ recheck,
  touching the index at all is the wrong thing: rather, the behavior of
  mark/restore ought to amount to saving and restoring the es_epqScanDone
  flag.  We can avoid finding a place to actually save the flag, for the
  moment, because given the assumption that no caller will set a mark
  before fetching a tuple, es_epqScanDone must always be set by the time we
  try to mark.  So the actual behavior change required is just to not reach
  the index access if a test tuple is supplied.

  The set of plan node types that need to consider this issue are those
  that support EPQ test tuples (i.e., call ExecScan()) and also support
  mark/restore; which is to say, IndexScan, IndexOnlyScan, and perhaps
  CustomScan.  It's tempting to try to fix the problem in one place by
  teaching ExecMarkPos() itself about EPQ; but ExecMarkPos supports some
  plan types that aren't Scans, and also it seems risky to make assumptions
  about what a CustomScan wants to do here.  Also, the most likely future
  change here is to decide that we do need to support marks placed before
  the first tuple, which would require additional work in IndexScan and
  IndexOnlyScan in any case.

  Hence, fix the EPQ issue in nodeIndexscan.c and nodeIndexonlyscan.c,
  accepting the small amount of code duplicated thereby, and leave it to
  CustomScan providers to fix this bug if they have it.

  Back-patch to v10 where commit 09529a70b came in.  In earlier branches,
  the index_markpos() call is a waste of cycles when EPQ is active, but no
  more than that, so it doesn't seem appropriate to back-patch further.

  Discussion: https://postgr.es/m/20180126074932.3098.97815@wrigleys.postgresql.org

* Avoid unnecessary use of pg_strcasecmp for already-downcased identifiers. (Tom Lane, 2018-01-26)

  We have a lot of code in which option names, which from the user's
  viewpoint are logically keywords, are passed through the grammar as plain
  identifiers, and then matched to string literals during command
  execution.  This approach avoids making words into lexer keywords
  unnecessarily.  Some places matched these strings using plain strcmp,
  some using pg_strcasecmp.  But the latter should be unnecessary since
  identifiers would have been downcased on their way through the parser.
  Aside from any efficiency concerns (probably not a big factor), the lack
  of consistency in this area creates a hazard of subtle bugs due to
  different places coming to different conclusions about whether two option
  names are the same or different.  Hence, standardize on using strcmp() to
  match any option names that are expected to have been fed through the
  parser.

  This does create a user-visible behavioral change, which is that while
  formerly all of these would work:

      alter table foo set (fillfactor = 50);
      alter table foo set (FillFactor = 50);
      alter table foo set ("fillfactor" = 50);
      alter table foo set ("FillFactor" = 50);

  now the last case will fail because that double-quoted identifier is
  different from the others.  However, none of our documentation says that
  you can use a quoted identifier in such contexts at all, and we should
  discourage doing so since it would break if we ever decide to parse such
  constructs as true lexer keywords rather than poor man's substitutes.  So
  this shouldn't create a significant compatibility issue for users.

  Daniel Gustafsson, reviewed by Michael Paquier, small changes by me

  Discussion: https://postgr.es/m/29405B24-564E-476B-98C0-677A29805B84@yesql.se

* Factor some code out of create_grouping_paths. (Robert Haas, 2018-01-26)

  This is preparatory refactoring to prepare the way for partition-wise
  aggregate, which will reuse the new subroutines for child grouping rels.
  It also does not seem like a bad idea on general principle, as the
  function was getting pretty long.

  Jeevan Chalke.  The larger patch series of which this patch is a part was
  reviewed and tested by Antonin Houska, Rajkumar Raghuwanshi, Ashutosh
  Bapat, David Rowley, Dilip Kumar, Konstantin Knizhnik, Pascal Legrand,
  and me.  Some cosmetic changes by me.

  Discussion: http://postgr.es/m/CAM2+6=V64_xhstVHie0Rz=KPEQnLJMZt_e314P0jaT_oJ9MR8A@mail.gmail.com

* Remove the obsolete WITH clause of CREATE FUNCTION. (Tom Lane, 2018-01-26)

  This clause was superseded by SQL-standard syntax back in 7.3.  We've
  kept it around for backwards-compatibility purposes ever since; but 15
  years seems like long enough for that, especially seeing that there are
  undocumented weirdnesses in how it interacts with the SQL-standard syntax
  for specifying the same options.

  Michael Paquier, per an observation by Daniel Gustafsson; some small
  cosmetic adjustments to nearby code by me.

  Discussion: https://postgr.es/m/20180115022748.GB1724@paquier.xyz

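  For context, a sketch of the two syntaxes; the function is a made-up
  example, and the WITH attribute shown reflects the historical pre-7.3
  spelling as best understood here:

      -- removed, pre-7.3 style:
      --   CREATE FUNCTION add_one(int) RETURNS int
      --       AS 'SELECT $1 + 1' LANGUAGE SQL
      --       WITH (isStrict);

      -- SQL-standard form, which remains supported:
      CREATE FUNCTION add_one(int) RETURNS int
          AS 'SELECT $1 + 1' LANGUAGE SQL STRICT;
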
* Use abstracted SSL API in server connection log messages (Peter Eisentraut, 2018-01-26)

  The existing "connection authorized" server log messages used OpenSSL API
  calls directly, even though similar abstracted API calls exist.  Change
  to use the latter instead.

  Change the function prototype for the functions that return the TLS
  version and the cipher to return const char * directly instead of copying
  into a buffer.  That makes them slightly easier to use.

  Add bits= to the message.  psql shows that, so we might as well show the
  same information on the client and server.

  Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Remove byte-masking macros for Datum conversion macros (Peter Eisentraut, 2018-01-26)

  As the comment there stated, these were needed for old-style user-defined
  functions, but since we removed support for those, we don't need this
  anymore.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Support --no-comments in pg_dump, pg_dumpall, pg_restore. (Tom Lane, 2018-01-25)

  We have switches already to suppress other subsidiary object properties,
  such as ACLs, security labels, ownership, and tablespaces, so just on the
  grounds of symmetry we should allow suppressing comments as well.

  Also, commit 0d4e6ed30 added a positive reason to have this feature, i.e.
  to allow obtaining the old behavior of selective pg_restore should anyone
  desire that.  Recent commits have removed the cases where pg_dump emitted
  comments on built-in objects that the restoring user might not have
  privileges to comment on, so the original primary motivation for this
  feature is gone, but it still seems at least somewhat useful in its own
  right.

  Robins Tharakan, reviewed by Fabrízio Mello

  Discussion: https://postgr.es/m/CAEP4nAx22Z4ch74oJGzr5RyyjcyUSbpiFLyeYXX8pehfou92ug@mail.gmail.com

* Add missing "static" markers. (Tom Lane, 2018-01-25)

  Per buildfarm.

* Clean up some aspects of pg_dump/pg_restore item-selection logic. (Tom Lane, 2018-01-25)

  Ensure that CREATE DATABASE and related commands are issued when, and
  only when, --create is specified.  Previously there were scenarios where
  using selective-dump switches would prevent --create from having any
  effect.  For example, it would fail to do anything in pg_restore if the
  archive file had been made by a selective dump, because there would be no
  TOC entry for the database.

  Since we don't issue \connect either if we don't issue CREATE DATABASE,
  this could result in unexpectedly restoring objects into the wrong
  database.

  Also fix pg_restore's selective restore logic so that when an object is
  selected to be restored, we also restore its ACL, comment, and security
  label if any.  Previously there was no way to get the latter properties
  except through tedious mucking about with a -L file.  If, for some
  reason, you don't want these properties, you can match the old behavior
  by adding --no-acl etc.

  While at it, try to make _tocEntryRequired() a little better organized
  and better documented.

  Discussion: https://postgr.es/m/32668.1516848577@sss.pgh.pa.us

* Ignore partitioned indexes where appropriate (Alvaro Herrera, 2018-01-25)

  get_relation_info() was too optimistic about opening indexes in
  partitioned tables, which would raise errors when any queries were
  planned on such tables.  Fix by ignoring any indexes of the partitioned
  kind.

  CLUSTER (and ALTER TABLE CLUSTER ON) had a similar problem.  Fix by
  disallowing these commands in partitioned tables.

  Fallout from 8b08f7d4820f.

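  A small, invented illustration of the now-disallowed CLUSTER case:

      CREATE TABLE pt (a int) PARTITION BY RANGE (a);
      CREATE INDEX pt_a_idx ON pt (a);   -- creates a partitioned index

      -- both forms are now rejected with an error on a partitioned table:
      --   CLUSTER pt USING pt_a_idx;
      --   ALTER TABLE pt CLUSTER ON pt_a_idx;
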
* Improve pg_dump's handling of "special" built-in objects. (Tom Lane, 2018-01-25)

  We had some pretty ad-hoc handling of the public schema and the plpgsql
  extension, which are both presumed to exist in template0 but might be
  modified or deleted in the database being dumped.

  Up to now, by default pg_dump would emit a CREATE EXTENSION IF NOT EXISTS
  command as well as a COMMENT command for plpgsql.  The usefulness of the
  former is questionable, and the latter caused annoying errors in
  non-superuser dump/restore scenarios.  Let's instead install a rule that
  built-in extensions (identified by having low-numbered OIDs) are not to
  be dumped.  We were doing it that way already in binary-upgrade mode, so
  this just makes regular mode behave the same.  It remains true that if
  someone has installed a non-default ACL on the plpgsql language, that
  will get dumped thanks to the pg_init_privs mechanism.  This is more
  consistent with the handling of built-in objects of other kinds.

  Also, change the very ad-hoc mechanism that was used to avoid dumping
  creation and comment commands for the public schema.  Instead of
  hardwiring a test in _printTocEntry(), make use of the DUMP_COMPONENT_
  infrastructure to mark that schema up-front about what we want to do with
  it.  This has the visible effect that the public schema won't be
  mentioned in the output at all, except for updating its ACL if it has a
  non-default ACL.  Previously, while it was normally not mentioned,
  --clean mode would drop and recreate it, again causing headaches for
  non-superuser usage.  This change likewise makes the public schema less
  special and more like other built-in objects.

  If plpgsql, or the public schema, has been removed entirely in the source
  DB, that situation won't be reproduced in the destination ... but that
  was true before.

  Discussion: https://postgr.es/m/29048.1516812451@sss.pgh.pa.us

* Remove use of byte-masking macros in record_image_cmp (Peter Eisentraut, 2018-01-25)

  These were introduced in 4cbb646334b3b998a29abef0d57608d42097e6c9, but
  after further analysis and testing, they should not be necessary and
  probably weren't the part of that commit that fixed anything.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Allow spaces in connection strings in SSL tests (Peter Eisentraut, 2018-01-25)

  Connection strings can have items with spaces in them, wrapped in quotes.
  The tests however ran a SELECT '$connstr' upon connection which broke on
  the embedded quotes.  Use dollar quotes on the connstr to protect against
  this.

  This was hit during the development of the macOS Secure Transport patch,
  but is independent of it.

  Author: Daniel Gustafsson <daniel@yesql.se>

* Avoid referencing off the end of subplan_partition_offsets. (Robert Haas, 2018-01-24)

  Report by buildfarm member skink and Tom Lane.  Analysis by me.  Patch by
  Amit Khandekar.

  Discussion: http://postgr.es/m/CAJ3gD9fVA1iXQYhfqHP5n_TEd4U9=V8TL_cc-oKRnRmxgdvJrQ@mail.gmail.com

* Add tests for record_image_eq and record_image_cmp (Peter Eisentraut, 2018-01-24)

  record_image_eq was covered a bit by the materialized view code that it
  is meant to support, but record_image_cmp was not tested at all.

  While we're here, add more tests to record_eq and record_cmp as well, for
  symmetry.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

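  For orientation, a hedged sketch of what these functions compare; it
  assumes the undocumented *= and *< record operators, which are understood
  to be backed by the record_image comparison support:

      -- byte-image comparison of composite values, as opposed to the
      -- type-specific equality used by record_eq/record_cmp
      SELECT ROW(1, 'a'::text) *= ROW(1, 'a'::text);   -- true
      SELECT ROW(1, 'a'::text) *< ROW(1, 'b'::text);   -- image ordering
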