path: root/src
Commit message (Author, Date)
* Fix typos in comments (Magnus Hagander, 2015-05-17)
  Dmitriy Olshevskiy
* Fix whitespace (Peter Eisentraut, 2015-05-16)
* pg_upgrade: no need to check for matching float8_pass_by_value (Bruce Momjian, 2015-05-16)
  Report by Noah Misch
* More portability fixing for bipartite_match.c. (Tom Lane, 2015-05-16)
  <float.h> is required for isinf() on some platforms. Per buildfarm.
* pg_upgrade: force timeline 1 in the new cluster (Bruce Momjian, 2015-05-16)
  Previously, promoted standby servers could not be upgraded because of a missing WAL history file. (Timeline 1 doesn't need a history file, and we don't copy WAL files anyway.)

  Report by Christian Echerer(?), Alexey Klyukin

  Backpatch through 9.0
* pg_upgrade: only allow template0 to be non-connectable (Bruce Momjian, 2015-05-16)
  This patch causes pg_upgrade to error out during its check phase if:

  (1) template0 is marked connectable, or
  (2) any other database is marked non-connectable.

  This is done because, in the first case, pg_upgrade would fail because the pg_dumpall --globals restore would fail, and in the second case, the database would not be restored, leading to data loss.

  Report by Matt Landry (1), Stephen Frost (2)

  Backpatch through 9.0
* Avoid direct use of INFINITY. (Tom Lane, 2015-05-15)
  It's not very portable. Per buildfarm.
* Support GROUPING SETS, CUBE and ROLLUP. (Andres Freund, 2015-05-16)
  This SQL standard functionality allows aggregating data by several GROUP BY clauses at once. Each grouping set returns rows with the columns that are grouped by only in other sets set to NULL. This could previously be achieved by doing each grouping as a separate query, conjoined by UNION ALLs. Besides being considerably more concise, grouping sets will in many cases be faster, requiring only one scan over the underlying data.

  The current implementation of grouping sets only supports using sorting for input. Individual sets that share a sort order are computed in one pass. If there are sets that don't share a sort order, additional sort & aggregation steps are performed. These additional passes are sourced by the previous sort step, thus avoiding repeated scans of the source data. The code is structured in a way that adding support for purely using hash aggregation, or a mix of hashing and sorting, is possible. Sorting was chosen to be supported first, as it is the most generic method of implementation.

  Instead of, as in earlier versions of the patch, representing the chain of sort and aggregation steps as full-blown planner and executor nodes, all but the first sort are performed inside the aggregation node itself. This avoids the need to do some unusual gymnastics to handle having to return aggregated and non-aggregated tuples from underlying nodes, as well as having to shut down underlying nodes early to limit memory usage. The optimizer still builds Sort/Agg nodes to describe each phase, but they're not part of the plan tree; instead they are additional data for the aggregation node. They're a convenient and preexisting way to describe aggregation and sorting. The first (and possibly only) sort step is still performed as a separate execution step. That retains similarity with existing GROUP BY plans, makes rescans fairly simple, avoids very deep plans (leading to slow explains) and easily allows avoiding the sorting step if the underlying data is sorted by other means.

  A somewhat ugly side of this patch is having to deal with a grammar ambiguity between the new CUBE keyword and the cube extension/functions named cube (and rollup). To avoid breaking existing deployments of the cube extension it has not been renamed, nor has cube been made a reserved keyword. Instead, precedence hacking is used to make GROUP BY cube(..) refer to the CUBE grouping sets feature, and not the function cube(). To actually group by a function cube(), unlikely as that might be, the function name has to be quoted.

  Needs a catversion bump because stored rules may change.

  Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
  Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
  Discussion: CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw@mail.gmail.com
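  For illustration, a minimal sketch of the new syntax (the sales table and its columns are invented for this example):

      CREATE TABLE sales (region text, product text, amount numeric);

      SELECT region, product, sum(amount)
      FROM sales
      GROUP BY GROUPING SETS ((region), (product), ());

      -- CUBE and ROLLUP are shorthand for common grouping-set combinations
      SELECT region, product, sum(amount)
      FROM sales
      GROUP BY CUBE (region, product);

      -- to group by the cube extension's cube() function instead, quote it:
      --   GROUP BY "cube"(col)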
* Update time zone data files to tzdata release 2015d. (Tom Lane, 2015-05-15)
  DST law changes in Egypt, Mongolia, Palestine. Historical corrections for Canada and Chile. Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).
* Add BRIN infrastructure for "inclusion" opclasses (Alvaro Herrera, 2015-05-15)
  This lets BRIN be used with R-Tree-like indexing strategies.

  Also provided are operator classes for range types, box and inet/cidr. The infrastructure provided here should be sufficient to create operator classes for similar datatypes; for instance, opclasses for PostGIS geometries should be doable, though we didn't try to implement one.

  (A box/point opclass was also submitted, but we ripped it out before commit because the handling of floating point comparisons in existing code is inconsistent and would generate corrupt indexes.)

  Author: Emre Hasegeli. Cosmetic changes by me
  Review: Andreas Karlsson
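  As a sketch of how such an opclass might be used (the table, column and opclass names here are assumed for illustration, not taken from the commit):

      CREATE TABLE reservations (room int, during tsrange);
      CREATE INDEX reservations_during_brin
          ON reservations USING brin (during range_inclusion_ops);
      SELECT * FROM reservations
      WHERE during && tsrange('2015-05-01', '2015-05-02');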
* Improve test for CONVERT() with GB18030 <-> UTF8. (Tom Lane, 2015-05-15)
  Add a bit of coverage of high code points.

  Arjen Nienhuis
* Move strategy numbers to include/access/stratnum.h (Alvaro Herrera, 2015-05-15)
  For upcoming BRIN opclasses, it's convenient to have strategy numbers defined in a single place. Since there's nothing appropriate, create it. The StrategyNumber typedef now lives there, as well as existing strategy numbers for B-trees (from skey.h) and R-tree-and-friends (from gist.h). skey.h is forced to include stratnum.h because of the StrategyNumber typedef, but gist.h is not; extensions that currently rely on gist.h for rtree strategy numbers might need to add a new include of stratnum.h.

  A few .c files can stop including skey.h and/or gist.h, which is a nice side benefit.

  Per discussion:
  https://www.postgresql.org/message-id/20150514232132.GZ2523@alvh.no-ip.org

  Authored by Emre Hasegeli and Álvaro.

  (It's not clear to me why bootscanner.l has any #include lines at all.)
* SQL standard feature T613 (Sampling) now supported (Simon Riggs, 2015-05-15)
* Fix uninitialized variable. (Tom Lane, 2015-05-15)
  Per compiler warnings.
* Extend GB18030 encoding conversion to cover full Unicode range. (Tom Lane, 2015-05-15)
  Our previous code for GB18030 <-> UTF8 conversion only covered Unicode code points up to U+FFFF, but the actual spec defines conversions for all code points up to U+10FFFF. That would be rather impractical as a lookup table, but fortunately there is a simple algorithmic conversion between the additional code points and the equivalent GB18030 byte patterns. Make use of the just-added callback facility in LocalToUtf/UtfToLocal to perform the additional conversions.

  Having created the infrastructure to do that, we can use the same code to map certain linearly-related subranges of the Unicode space below U+FFFF, allowing removal of the corresponding lookup table entries. This more than halves the lookup table size, which is a substantial savings; utf8_and_gb18030.so drops from nearly a megabyte to about half that.

  In support of doing that, replace ISO10646-GB18030.TXT with the data file gb-18030-2000.xml (retrieved from http://source.icu-project.org/repos/icu/data/trunk/charset/data/xml/ ) in which these subranges have been deleted from the simple lookup entries.

  Per bug #12845 from Arjen Nienhuis. The conversion code added here is based on his proposed patch, though I whacked it around rather heavily.
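  A quick way to see the effect from SQL, assuming a UTF8 database (U+1D11E is just an arbitrary code point above U+FFFF; before this change such a conversion failed with "character has no equivalent"):

      SELECT convert_from(convert_to(U&'\+01D11E', 'GB18030'), 'GB18030');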
* TABLESAMPLE, SQL Standard and extensible (Simon Riggs, 2015-05-15)
  Add a TABLESAMPLE clause to SELECT statements that allows the user to specify random BERNOULLI sampling or block-level SYSTEM sampling. The implementation allows extensible sampling functions to be written, using a standard API. The basic version follows the SQL standard exactly. Usable concrete use cases for the sampling API follow in later commits.

  Petr Jelinek

  Reviewed by Michael Paquier and Simon Riggs
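  For example (the orders table is invented; the argument is the sampling percentage):

      CREATE TABLE orders (id int, total numeric);
      SELECT count(*) FROM orders TABLESAMPLE BERNOULLI (10);
      SELECT count(*) FROM orders TABLESAMPLE SYSTEM (10) REPEATABLE (42);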
* Silence another create_index regression test failure. (Heikki Linnakangas, 2015-05-15)
  More platform differences in the less-significant digits in output. Per buildfarm member rover_firefly, still.
* Fix outdated src/test/mb/ tests, and add a GB18030 test. (Tom Lane, 2015-05-15)
  The expected-output files for these tests were broken by the recent addition of a warning for hash indexes. Update them.

  Also add a test case for GB18030 encoding, similar to the other ones. This is a pretty weak test, but it's better than nothing.
* Add archive_mode='always' option. (Heikki Linnakangas, 2015-05-15)
  In 'always' mode, the standby independently archives all files it receives from the primary.

  Original patch by Fujii Masao, docs and review by me.
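  As a sketch, the corresponding settings on the standby might look like this in postgresql.conf (the archive_command shown is only a placeholder; changing archive_mode requires a server restart):

      archive_mode = 'always'
      archive_command = 'cp %p /path/to/archive/%f'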
* Silence create_index regression test failure. (Heikki Linnakangas, 2015-05-15)
  The expected output contained some floating point values which might get rounded slightly differently on different platforms. The exact output isn't very interesting in this test, so just round it.

  Per buildfarm member rover_firefly.
* Fix datatype confusion with the new lossy GiST distance functions. (Heikki Linnakangas, 2015-05-15)
  We can only support a lossy distance function when the distance function's datatype is comparable with the original ordering operator's datatype. The distance function always returns a float8, so we are limited to float8, and float4 (by a hard-coded cast of the float8 to float4).

  In light of this limitation, it seems like a good idea to have a separate 'recheck' flag for the ORDER BY expressions, so that if you have a non-lossy distance function, it still works with lossy quals. There are no such cases among the built-in or contrib opclasses, but it's plausible.

  There was a hidden assumption that the ORDER BY values returned by GiST match the original ordering operator's return type, but there are plenty of examples where that's not true, e.g. in btree_gist and pg_trgm. As long as the distance function is not lossy, we can tolerate that and just not return the distance to the executor (or rather, always return NULL). The executor doesn't need the distances if there are no lossy results.

  There was another little bug: the recheck variable was not initialized before calling the distance function. That revealed the bigger issue, as the executor tried to reorder tuples that didn't need reordering, and that failed because of the datatype mismatch.
* Fix insufficiently-paranoid GB18030 encoding verifier. (Tom Lane, 2015-05-15)
  The previous coding effectively only verified that the second byte of a multibyte character was in the expected range; moreover, it wasn't careful to make sure that the second byte even exists in the buffer before touching it. The latter seems unlikely to cause any real problems in the field (in particular, it could never be a problem with null-terminated input), but it's still a bug.

  Since GB18030 is not a supported backend encoding, the only thing we'd really be doing with GB18030 text is converting it to UTF8 in LocalToUtf, which would fail anyway on any invalid character for lack of a match in its lookup table. So the only user-visible consequence of this change should be that you'll get "invalid byte sequence for encoding" rather than "character has no equivalent" for malformed GB18030 input.

  However, impending changes to the GB18030 conversion code will require these tighter up-front checks to avoid producing bogus results.
* Support --verbose option in reindexdb. (Fujii Masao, 2015-05-15)
  Sawada Masahiko, reviewed by Fabrízio Mello
* Allow GiST distance function to return merely a lower-bound. (Heikki Linnakangas, 2015-05-15)
  The distance function can now set *recheck = true, like index quals. The executor will then re-check the ORDER BY expressions, and use a queue to reorder the results on the fly.

  This makes it possible to do kNN-searches on polygons and circles, which don't store the exact value in the index, but just a bounding box.

  Alexander Korotkov and me
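  A usage sketch, under the assumption that a polygon <-> point ordering operator and a default GiST opclass for polygon are available (table and data are invented; the index's bounding boxes give an approximate ordering and the executor rechecks exact distances):

      CREATE TABLE shapes (id serial, outline polygon);
      CREATE INDEX shapes_outline_gist ON shapes USING gist (outline);
      SELECT id FROM shapes ORDER BY outline <-> point '(0,0)' LIMIT 5;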
* Support VERBOSE option in REINDEX command. (Fujii Masao, 2015-05-15)
  When this option is specified, a progress report is printed as each index is reindexed.

  Per discussion, we agreed on the following syntax for the extensibility of the options:

      REINDEX (flexible options) { INDEX | ... } name

  Sawada Masahiko. Reviewed by Robert Haas, Fabrízio Mello, Alvaro Herrera, Kyotaro Horiguchi, Jim Nasby and me.

  Discussion: CAD21AoA0pK3YcOZAFzMae+2fcc3oGp5zoRggDyMNg5zoaWDhdQ@mail.gmail.com
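  For example, assuming a table named orders exists:

      REINDEX (VERBOSE) TABLE orders;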
* Teach UtfToLocal/LocalToUtf to support algorithmic encoding conversions. (Tom Lane, 2015-05-14)
  Until now, these functions have only supported encoding conversions using lookup tables, which is fine as long as there aren't too many code points to convert. However, GB18030 expects all 1.1 million Unicode code points to be convertible, which would require a ridiculously-sized lookup table. Fortunately, a large fraction of those conversions can be expressed through arithmetic, ie the conversions are one-to-one in certain defined ranges. To support that, provide a callback function that is used after consulting the lookup tables. (This patch doesn't actually change anything about the GB18030 conversion behavior, just provides infrastructure for fixing it.)

  Since this requires changing the APIs of UtfToLocal/LocalToUtf anyway, take the opportunity to rearrange their argument lists into what seems to me a saner order. And beautify the call sites by using lengthof() instead of error-prone sizeof() arithmetic.

  In passing, also mark all the lookup tables used by these calls "const". This moves an impressive amount of stuff into the text segment, at least on my machine, and is safer anyhow.
* Separate block sampling functions (Simon Riggs, 2015-05-15)
  Refactoring ahead of tablesample patch

  Requested and reviewed by Michael Paquier

  Petr Jelinek
* pg_upgrade: make controldata checks more consistent (Bruce Momjian, 2015-05-14)
  Also add missing float8_pass_by_value check.
* Add pg_settings.pending_restart column (Peter Eisentraut, 2015-05-14)
  with input from David G. Johnston, Robert Haas, Michael Paquier
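  For example, to list settings that have been changed in the configuration files but still require a restart to take effect:

      SELECT name, setting FROM pg_settings WHERE pending_restart;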
* Support "expanded" objects, particularly arrays, for better performance.Tom Lane2015-05-14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch introduces the ability for complex datatypes to have an in-memory representation that is different from their on-disk format. On-disk formats are typically optimized for minimal size, and in any case they can't contain pointers, so they are often not well-suited for computation. Now a datatype can invent an "expanded" in-memory format that is better suited for its operations, and then pass that around among the C functions that operate on the datatype. There are also provisions (rudimentary as yet) to allow an expanded object to be modified in-place under suitable conditions, so that operations like assignment to an element of an array need not involve copying the entire array. The initial application for this feature is arrays, but it is not hard to foresee using it for other container types like JSON, XML and hstore. I have hopes that it will be useful to PostGIS as well. In this initial implementation, a few heuristics have been hard-wired into plpgsql to improve performance for arrays that are stored in plpgsql variables. We would like to generalize those hacks so that other datatypes can obtain similar improvements, but figuring out some appropriate APIs is left as a task for future work. (The heuristics themselves are probably not optimal yet, either, as they sometimes force expansion of arrays that would be better left alone.) Preliminary performance testing shows impressive speed gains for plpgsql functions that do element-by-element access or update of large arrays. There are other cases that get a little slower, as a result of added array format conversions; but we can hope to improve anything that's annoyingly bad. In any case most applications should see a net win. Tom Lane, reviewed by Andres Freund
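  A sketch of the kind of PL/pgSQL code that benefits: element-by-element updates of an array held in a plpgsql variable (the function name and sizes are invented):

      CREATE FUNCTION fill_array(n int) RETURNS float8[]
      LANGUAGE plpgsql AS $$
      DECLARE
          a float8[] := '{}';
      BEGIN
          FOR i IN 1..n LOOP
              a[i] := random();  -- updates the expanded array in place
          END LOOP;
          RETURN a;
      END;
      $$;

      SELECT array_length(fill_array(100000), 1);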
* Fix comment. (Robert Haas, 2015-05-13)
  Commit 78efd5c1edb59017f06ef96773e64e6539bfbc86 overlooked this.

  Report by Peter Geoghegan.
* Extend abbreviated key infrastructure to datum tuplesorts. (Robert Haas, 2015-05-13)
  Andrew Gierth, reviewed by Peter Geoghegan and by me.
* Fix postgres_fdw to return the right ctid value in EvalPlanQual cases. (Tom Lane, 2015-05-13)
  If a postgres_fdw foreign table is a non-locked source relation in an UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the query selects its ctid column, the wrong value would be returned if an EvalPlanQual recheck occurred. This happened because the foreign table's result row was copied via the ROW_MARK_COPY code path, and EvalPlanQualFetchRowMarks just unconditionally set the reconstructed tuple's t_self to "invalid".

  To fix that, we can have EvalPlanQualFetchRowMarks copy the composite datum's t_ctid field, and be sure to initialize that along with t_self when postgres_fdw constructs a tuple to return.

  If we just did that much then EvalPlanQualFetchRowMarks would start returning "(0,0)" as ctid for all other ROW_MARK_COPY cases, which perhaps does not matter much, but then again maybe it might. The cause of that is that heap_form_tuple, which is the ultimate source of all composite datums, simply leaves t_ctid as zeroes in newly constructed tuples. That seems like a bad idea on general principles: a field that's really not been initialized shouldn't appear to have a valid value. So let's eat the trivial additional overhead of doing "ItemPointerSetInvalid(&(td->t_ctid))" in heap_form_tuple.

  This closes out our handling of Etsuro Fujita's report that tableoid and ctid weren't correctly set in postgres_fdw EvalPlanQual cases. Along the way we did a great deal of work to improve FDWs' ability to control row locking behavior; which was not wasted effort by any means, but it didn't end up being a fix for this problem because that feature would be too expensive for postgres_fdw to use all the time.

  Although the fix for the tableoid misbehavior was back-patched, I'm hesitant to do so here; it seems far less likely that people would care about remote ctid than tableoid, and even such a minor behavioral change as this in heap_form_tuple is perhaps best not back-patched. So commit to HEAD only, at least for the moment.

  Etsuro Fujita, with some adjustments by me
* Fix jsonb replace and delete on scalars and empty structures (Andrew Dunstan, 2015-05-13)
  These operations now error out if attempted on scalars, and simply return the input if attempted on empty arrays or objects. Along the way we remove the unnecessary cloning of the input when it's known to be unchanged. Regression tests covering these cases are added.
* Remove useless assertion. (Robert Haas, 2015-05-13)
  Here, snapshot->xcnt is an unsigned type, so it will always be non-negative.
* PL/Python: Remove procedure cache invalidation (Peter Eisentraut, 2015-05-12)
  This was added to react to changes in the pg_transform catalog, but building with CLOBBER_CACHE_ALWAYS showed that PL/Python was not prepared for having its procedure cache cleared. Since this is a marginal use case, and we don't do this for other catalogs anyway, we can postpone this to another day.
* Fix ON CONFLICT bugs that manifest when used in rules. (Andres Freund, 2015-05-13)
  Specifically, the tlist and rti of the pseudo "excluded" relation weren't properly treated by expression_tree_walker, which led to errors when excluded was referenced inside a rule because the varnos were not properly adjusted. Similar omissions in OffsetVarNodes and expression_tree_mutator had less impact, but should obviously be fixed nonetheless.

  A couple of tests for ON CONFLICT UPDATE into INSERT-rule-bearing relations have been added.

  In passing I updated a couple of comments.
* Fix some errors from jsonb functions patch. (Andrew Dunstan, 2015-05-12)
  The catalog version should have been bumped, and the alternative regression result file was not up to date with the name of jsonb_pretty.
* Additional functions and operators for jsonb (Andrew Dunstan, 2015-05-12)
  jsonb_pretty(jsonb) produces nicely indented json output.
  jsonb || jsonb concatenates two jsonb values.
  jsonb - text removes a key and its associated value from the json.
  jsonb - int removes the designated array element.
  jsonb - text[] removes a key and associated value or array element at the designated path.
  jsonb_replace(jsonb,text[],jsonb) replaces the array element designated by the path or the value associated with the key designated by the path with the given value.

  Original work by Dmitry Dolgov, adapted and reworked for PostgreSQL core by Andrew Dunstan, reviewed and tidied up by Petr Jelinek.
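  A few usage sketches of the names as committed here (the literals are chosen arbitrarily):

      SELECT jsonb_pretty('{"a": 1, "b": [1, 2, 3]}'::jsonb);
      SELECT '{"a": 1}'::jsonb || '{"b": 2}'::jsonb;        -- {"a": 1, "b": 2}
      SELECT '{"a": 1, "b": 2}'::jsonb - 'a'::text;         -- {"b": 2}
      SELECT '["x", "y", "z"]'::jsonb - 1;                  -- ["x", "z"]
      SELECT jsonb_replace('{"a": {"b": 1}}'::jsonb, '{a,b}', '2'::jsonb);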
* Add support for doing late row locking in FDWs. (Tom Lane, 2015-05-12)
  Previously, FDWs could only do "early row locking", that is lock a row as soon as it's fetched, even though local restriction/join conditions might discard the row later. This patch adds callbacks that allow FDWs to do late locking in the same way that it's done for regular tables.

  To make use of this feature, an FDW must support the "ctid" column as a unique row identifier. Currently, since ctid has to be of type TID, the feature is of limited use, though in principle it could be used by postgres_fdw. We may eventually allow FDWs to specify another data type for ctid, which would make it possible for more FDWs to use this feature.

  This commit does not modify postgres_fdw to use late locking. We've tested some prototype code for that, but it's not in committable shape, and besides it's quite unclear whether it actually makes sense to do late locking against a remote server. The extra round trips required are likely to outweigh any benefit from improved concurrency.

  Etsuro Fujita, reviewed by Ashutosh Bapat, and hacked up a lot by me
* pgbench: Don't fail during startup (Stephen Frost, 2015-05-12)
  In pgbench, report, but ignore, any errors returned when attempting to vacuum/truncate the default tables during startup. If the tables are needed, we'll error out soon enough anyway.

  Per discussion with Tatsuo, David Rowley, Jim Nasby, Robert, Andres, Fujii, Fabrízio de Royes Mello, Tomas Vondra, Michael Paquier, Peter, based on a suggestion from Jeff Janes, patch from Robert, additional message wording from Tom.
* pg_basebackup -F t now succeeds with a long symlink target (Andrew Dunstan, 2015-05-12)
* doc build: use unique Makefile variable to control temp install (Bruce Momjian, 2015-05-12)
* "Fix" test_ddl_deparse regress test scheduleAlvaro Herrera2015-05-12
| | | | | | | | | MSVC is not smart enough to figure it out, so dumb down the Makefile and remove the schedule file. Also add a .gitignore file. Author: Michael Paquier
* doc: prevent SGML 'make check' from building temp install (Bruce Momjian, 2015-05-12)
  Report by Alvaro Herrera
* Map basebackup tablespaces using a tablespace_map file (Andrew Dunstan, 2015-05-12)
  Windows can't reliably restore symbolic links from a tar format, so instead during backup start we create a tablespace_map file, which is used by the restoring postgres to create the correct links in pg_tblspc.

  The backup protocol also now has an option to request this file to be included in the backup stream, and this is used by pg_basebackup when operating in tar mode. This is done on all platforms, not just Windows.

  This means that pg_basebackup will not work in tar mode against 9.4 and older servers, as this protocol option isn't implemented there.

  Amit Kapila, reviewed by Dilip Kumar, with a little editing from me.
* Replace some appendStringInfo* calls with more appropriate variants (Peter Eisentraut, 2015-05-11)
  Author: David Rowley <dgrowleyml@gmail.com>
* Allow on-the-fly capture of DDL event details (Alvaro Herrera, 2015-05-11)
  This feature lets user code inspect and take action on DDL events. Whenever a ddl_command_end event trigger is installed, DDL actions executed are saved to a list which can be inspected during execution of a function attached to ddl_command_end.

  The set-returning function pg_event_trigger_ddl_commands can be used to list actions so captured; it returns data about the type of command executed, as well as the affected object. This is sufficient for many uses of this feature. For the cases where it is not, we also provide a "command" column of a new pseudo-type pg_ddl_command, which is a pointer to a C structure that can be accessed by C code. The struct contains all the info necessary to completely inspect and even reconstruct the executed command.

  There is no actual deparse code here; that's expected to come later. What we have is enough infrastructure that the deparsing can be done in an external extension. The intention is that we will add some deparsing code in a later release, as an in-core extension.

  A new test module is included. It's probably insufficient as is, but it should be sufficient as a starting point for a more complete and future-proof approach.

  Authors: Álvaro Herrera, with some help from Andres Freund, Ian Barwick, Abhijit Menon-Sen.

  Reviews by Andres Freund, Robert Haas, Amit Kapila, Michael Paquier, Craig Ringer, David Steele.
  Additional input from Chris Browne, Dimitri Fontaine, Stephen Frost, Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer, Pavel Stěhule.

  Based on original work by Dimitri Fontaine, though I didn't use his code.

  Discussion:
  https://www.postgresql.org/message-id/m2txrsdzxa.fsf@2ndQuadrant.fr
  https://www.postgresql.org/message-id/20131108153322.GU5809@eldon.alvh.no-ip.org
  https://www.postgresql.org/message-id/20150215044814.GL3391@alvh.no-ip.org
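  A small sketch of how the new function might be used from an event trigger (the function and trigger names are invented):

      CREATE FUNCTION log_ddl() RETURNS event_trigger
      LANGUAGE plpgsql AS $$
      DECLARE
          r record;
      BEGIN
          FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
              RAISE NOTICE 'DDL: % on %', r.command_tag, r.object_identity;
          END LOOP;
      END;
      $$;

      CREATE EVENT TRIGGER log_ddl_commands
          ON ddl_command_end EXECUTE PROCEDURE log_ddl();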
* Allow LOCK TABLE .. ROW EXCLUSIVE MODE with INSERT (Stephen Frost, 2015-05-11)
  INSERT acquires RowExclusiveLock during normal operation and therefore it makes sense to allow LOCK TABLE .. ROW EXCLUSIVE MODE to be executed by users who have INSERT rights on a table (even if they don't have UPDATE or DELETE).

  Not back-patching this as it's a behavior change which, strictly speaking, loosens security restrictions.

  Per discussion with Tom and Robert (circa 2013).
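  For example, a role that has only INSERT privilege on a table can now do (the table name is invented):

      BEGIN;
      LOCK TABLE audit_log IN ROW EXCLUSIVE MODE;
      INSERT INTO audit_log DEFAULT VALUES;
      COMMIT;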
* pg_upgrade: use single or double-quotes in command-line strings (Bruce Momjian, 2015-05-11)
  This is platform-dependent.