path: root/src/backend/utils/adt
Commit log (newest first); each entry shows the commit summary, author, and date.

* Support timezone abbreviations that sometimes change. (Tom Lane, 2014-10-16)

  Up to now, PG has assumed that any given timezone abbreviation (such as "EDT") represents a constant GMT offset in the usage of any particular region; we had a way to configure what that offset was, but not for it to be changeable over time. But, as with most things horological, this view of the world is too simplistic: there are numerous regions that have at one time or another switched to a different GMT offset but kept using the same timezone abbreviation. Almost the entire Russian Federation did that a few years ago, and later this month they're going to do it again. And there are similar examples all over the world.

  To cope with this, invent the notion of a "dynamic timezone abbreviation", which is one that is referenced to a particular underlying timezone (as defined in the IANA timezone database) and means whatever it currently means in that zone. For zones that use or have used daylight-savings time, the standard and DST abbreviations continue to have the property that you can specify standard or DST time and get that time offset whether or not DST was theoretically in effect at the time. However, the abbreviations mean what they meant at the time in question (or most recently before that time) rather than being absolutely fixed. The standard abbreviation-list files have been changed to use this behavior for abbreviations that have actually varied in meaning since 1970. The old simple-numeric definitions are kept for abbreviations that have not changed, since they are a bit faster to resolve.

  While this is clearly a new feature, it seems necessary to back-patch it into all active branches, because otherwise use of Russian zone abbreviations is going to become even more problematic than it already was. This change supersedes the changes in commit 513d06ded et al to modify the fixed meanings of the Russian abbreviations; since we've not shipped that yet, this will avoid an undesirably incompatible (not to mention incorrect) change in behavior for timestamps between 2011 and 2014.

  This patch makes some cosmetic changes in ecpglib to keep its usage of datetime lookup tables as similar as possible to the backend code, but doesn't do anything about the increasingly obsolete set of timezone abbreviation definitions that are hard-wired into ecpglib. Whatever we do about that will likely not be appropriate material for back-patching. Also, a potential free() of a garbage pointer after an out-of-memory failure in ecpglib has been fixed.

  This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that caused it to produce unexpected results near a timezone transition, if both the "before" and "after" states are marked as standard time. We'd only ever thought about or tested transitions between standard and DST time, but that's not what's happening when a zone simply redefines their base GMT offset.

  In passing, update the SGML documentation to refer to the Olson/zoneinfo/zic timezone database as the "IANA" database, since it's now being maintained under the auspices of IANA.

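  A minimal illustration of the dynamic behavior, assuming the default timezone_abbreviations file and reasonably current IANA tz data (the exact offsets shown depend on the installed data):

      -- Moscow used UTC+4 as standard time from 2011 until October 2014,
      -- and UTC+3 afterwards; a dynamic "MSK" tracks that history.
      SET timezone = 'UTC';
      SELECT '2013-01-15 12:00 MSK'::timestamptz;  -- resolved as UTC+4
      SELECT '2015-01-15 12:00 MSK'::timestamptz;  -- resolved as UTC+3
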
* Fix bogus optimization in JSONB containment tests. (Tom Lane, 2014-10-11)

  When determining whether one JSONB object contains another, it's okay to make a quick exit if the first object has fewer pairs than the second: because we de-duplicate keys within objects, it is impossible that the first object has all the keys the second does. However, the code was applying this rule to JSONB arrays as well, where it does *not* hold because arrays can contain duplicate entries. The test was really in the wrong place anyway; we should do it within JsonbDeepContains, where it can be applied to nested objects, not only top-level ones.

  Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and Tom Lane.

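  A quick sketch of why the pair-count shortcut is invalid for arrays:

      -- The right-hand array has more elements than the left-hand one,
      -- yet containment still holds because duplicates don't matter.
      SELECT '[1, 3]'::jsonb @> '[1, 1, 1]'::jsonb;  -- true
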
* Split builtins.h to a new header ruleutils.h (Alvaro Herrera, 2014-10-08)

  The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.

  Reviewed by Andres Freund and Michael Paquier.

* Implement SKIP LOCKED for row-level locks (Alvaro Herrera, 2014-10-07)

  This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

  Catalog version bumped because this patch changes the representation of stored rules.

  Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

  Author: Thomas Munro

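  A sketch of the queue-like usage the message mentions (table and column names are hypothetical):

      -- Each worker claims one pending job; rows already locked by other
      -- workers are skipped instead of blocking.
      BEGIN;
      SELECT id, payload
        FROM jobs
       WHERE status = 'pending'
       ORDER BY id
       LIMIT 1
         FOR UPDATE SKIP LOCKED;
      -- ... process the job, mark it done, COMMIT
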
* Revert 95d737ff to add 'ignore_nulls' (Stephen Frost, 2014-09-29)

  Per discussion, revert the commit which added 'ignore_nulls' to row_to_json. This capability would be better added as an independent function rather than being bolted on to row_to_json. Additionally, the implementation didn't address complex JSON objects, and so was incomplete anyway.

  Pointed out by Tom and discussed with Andrew and Robert.

* Change JSONB's on-disk format for improved performance. (Tom Lane, 2014-09-29)

  The original design used an array of offsets into the variable-length portion of a JSONB container. However, such an array is basically uncompressible by simple compression techniques such as TOAST's LZ compressor. That's bad enough, but because the offset array is at the front, it tended to trigger the give-up-after-1KB heuristic in the TOAST code, so that the entire JSONB object was stored uncompressed; which was the root cause of bug #11109 from Larry White.

  To fix without losing the ability to extract a random array element in O(1) time, change this scheme so that most of the JEntry array elements hold lengths rather than offsets. With data that's compressible at all, there tend to be fewer distinct element lengths, so that there is scope for compression of the JEntry array. Every N'th entry is still an offset. To determine the length or offset of any specific element, we might have to examine up to N preceding JEntrys, but that's still O(1) so far as the total container size is concerned. Testing shows that this cost is negligible compared to other costs of accessing a JSONB field, and that the method does largely fix the incompressible-data problem.

  While at it, rearrange the order of elements in a JSONB object so that it's "all the keys, then all the values", not alternating keys and values. This doesn't really make much difference right at the moment, but it will allow providing a fast path for extracting individual object fields from large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the existing optimization for substring extraction from large EXTERNAL text values.

  Bump catversion to denote the incompatibility in on-disk format. We will need to fix pg_upgrade to disallow upgrading jsonb data stored with 9.4 betas 1 and 2.

  Heikki Linnakangas and Tom Lane

* Remove ill-conceived ban on zero length json object keys. (Andrew Dunstan, 2014-09-25)

  We recently removed a similar ban in json_object, but the ban in datum_to_json was left in place, which generated spurious errors in other JSON generators, notably json_build_object.

  Along the way, add an assertion that datum_to_json is not passed a null key. All current callers comply with this rule, but the assertion will catch any possible future misbehaviour.

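  For example, this now succeeds instead of raising a spurious error:

      SELECT json_build_object('', 1);  -- {"" : 1}
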
* Return NULL from json_object_agg if it gets no rows. (Andrew Dunstan, 2014-09-25)

  This makes it consistent with the docs and with all other builtin aggregates apart from count().

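  A minimal check of the new behavior:

      SELECT json_object_agg(k, v)
        FROM (VALUES ('a', 1)) AS t(k, v)
       WHERE false;
      -- result: NULL
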
* Code review for row security. (Stephen Frost, 2014-09-24)

  Buildfarm member tick identified an issue where the policies in the relcache for a relation were being replaced underneath a running query, leading to segfaults while processing the policies to be added to a query. Similar to how TupleDesc RuleLocks are handled, add an equalRSDesc() function to check if the policies have actually changed and, if not, swap back the rsdesc field (using the original instead of the temporarily built one; the whole structure is swapped and then specific fields swapped back). This now passes a CLOBBER_CACHE_ALWAYS run for me and should resolve the buildfarm error.

  In addition to addressing this, add a new chapter in Data Definition under Privileges which explains row security and provides examples of its usage, change \d to always list policies (even if row security is disabled, but note that it is disabled, or enabled with no policies), rework check_role_for_policy (it really didn't need the entire policy, but it did need to be using has_privs_of_role()), and change the field in pg_class to relrowsecurity from relhasrowsecurity, based on Heikki's suggestion.

  Also from Heikki, only issue SET ROW_SECURITY in pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and document --enable-row-security options for pg_dump and pg_restore.

  Lastly, fix a number of minor whitespace and typo issues from Heikki and Dimitri, add a missing #include, per Peter E, fix a few minor variable-assigned-but-not-used and resource leak issues from Coverity, and add tab completion for the role attribute BYPASSRLS as well.

* Add a fast pre-check for equality of equal-length strings. (Robert Haas, 2014-09-19)

  Testing reveals that doing a memcmp() before the strcoll() costs practically nothing, at least on the systems we tested, and that it speeds up sorts containing many equal strings significantly.

  Peter Geoghegan. Review by myself and Heikki Linnakangas. Comments rewritten by me.

* Row-Level Security Policies (RLS) (Stephen Frost, 2014-09-19)

  Building on the updatable security-barrier views work, add the ability to define policies on tables to limit the set of rows which are returned from a query and which are allowed to be added to a table. Expressions defined by the policy for filtering are added to the security barrier quals of the query, while expressions defined to check records being added to a table are added to the with-check options of the query.

  New top-level commands are CREATE/ALTER/DROP POLICY and are controlled by the table owner. Row Security is able to be enabled and disabled by the owner on a per-table basis using ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

  Per discussion, ROW SECURITY is disabled on tables by default and must be enabled for policies on the table to be used. If no policies exist on a table with ROW SECURITY enabled, a default-deny policy is used and no records will be visible.

  By default, row security is applied at all times except for the table owner and the superuser. A new GUC, row_security, is added which can be set to ON, OFF, or FORCE. When set to FORCE, row security will be applied even for the table owner and superusers. When set to OFF, row security will be disabled when allowed and an error will be thrown if the user does not have rights to bypass row security.

  Per discussion, pg_dump sets row_security = OFF by default to ensure that exports and backups will have all data in the table or will error if there are insufficient privileges to bypass row security. A new option has been added to pg_dump, --enable-row-security, to ask pg_dump to export with row security enabled.

  A new role capability, BYPASSRLS, which can only be set by the superuser, is added to allow other users to be able to bypass row security using row_security = OFF.

  Many thanks to the various individuals who have helped with the design, particularly Robert Haas for his feedback. Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, with additional changes and rework by me. Reviewers have included all of the above, Greg Smith, Jeff McCormick, and Robert Haas.

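  A compact sketch of the new commands (table, column, and policy names are hypothetical):

      CREATE TABLE accounts (owner name, balance numeric);
      ALTER TABLE accounts ENABLE ROW SECURITY;
      -- Each user sees (and can modify) only their own rows.
      CREATE POLICY account_owner ON accounts
          USING (owner = current_user);
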
* Revert f68dc5d86b9f287f80f4417f5a24d876eb13771d (Bruce Momjian, 2014-09-12)

  Renaming will have to be more comprehensive, so I need approval.

* More formatting.c variable renaming, for clarity (Bruce Momjian, 2014-09-12)

* Fix power_var_int() for large integer exponents. (Tom Lane, 2014-09-11)

  The code for raising a NUMERIC value to an integer power wasn't very careful about large powers. It got an outright wrong answer for an exponent of INT_MIN, due to failure to consider overflow of the Abs(exp) operation; which is fixable by using an unsigned rather than signed exponent value after that point.

  Also, even though the number of iterations of the power-computation loop is pretty limited, it's easy for the repeated squarings to result in ridiculously enormous intermediate values, which can take unreasonable amounts of time/memory to process, or even overflow the internal "weight" field and so produce a wrong answer. We can forestall misbehaviors of that sort by bailing out as soon as the weight value exceeds what will fit in int16, since then the final answer must overflow (if exp > 0) or underflow (if exp < 0) the packed numeric format.

  Per off-list report from Pavel Stehule. Back-patch to all supported branches.

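  The sort of input that is affected (the exact error text may vary):

      -- The result would have roughly 646 million decimal digits; this now
      -- bails out promptly with a numeric overflow error instead of
      -- grinding through enormous intermediate values.
      SELECT 2.0 ^ 2147483647;
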
* Add 'ignore_nulls' option to row_to_json (Stephen Frost, 2014-09-11)

  Provide an option to skip NULL values in a row when generating a JSON object from that row with row_to_json. This can reduce the size of the JSON object in cases where columns are NULL without really reducing the information in the JSON object. This also makes row_to_json into a single function with default values, rather than having multiple functions.

  In passing, change array_to_json to also be a single function with default values (we don't add an 'ignore_nulls' option yet; it's not clear that there is a sensible use-case there, and it hasn't been asked for in any case).

  Pavel Stehule

* Silence compiler warning on Windows. (Heikki Linnakangas, 2014-09-11)

  David Rowley.

* Implement mxid_age() to compute multi-xid age (Bruce Momjian, 2014-09-10)

  Report by Josh Berkus

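  A hypothetical monitoring query using the new function, analogous to age() for plain transaction IDs:

      SELECT relname, mxid_age(relminmxid)
        FROM pg_class
       WHERE relkind = 'r'
       ORDER BY 2 DESC
       LIMIT 10;
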
* Add width_bucket(anyelement, anyarray). (Tom Lane, 2014-09-09)

  This provides a convenient method of classifying input values into buckets that are not necessarily equal-width. It works on any sortable data type.

  The choice of function name is a bit debatable, perhaps, but showing that there's a relationship to the SQL standard's width_bucket() function seems more attractive than the other proposals.

  Petr Jelinek, reviewed by Pavel Stehule

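  The array supplies the (sorted) lower bounds of the buckets; the result is the bucket number, with 0 for an operand below the first bound:

      SELECT width_bucket(5, ARRAY[1, 3, 4, 10]);   -- 3
      SELECT width_bucket(0, ARRAY[1, 3, 4, 10]);   -- 0
      SELECT width_bucket(now(),
                          ARRAY['2013-01-01', '2014-01-01', '2015-01-01']::timestamptz[]);
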
* Allow empty content in xml type (Peter Eisentraut, 2014-09-09)

  The xml type previously rejected "content" that is empty or consists only of spaces, but the SQL/XML standard allows that, so change it. The accepted values for XML "documents" are not changed.

  Reviewed-by: Ali Akbar <the.apaan@gmail.com>

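  A minimal illustration (requires a build with libxml support):

      SELECT XMLPARSE (CONTENT '');    -- now accepted
      SELECT XMLPARSE (DOCUMENT '');   -- still an error
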
* Rename C variables in formatting.c, for clarity (Bruce Momjian, 2014-09-05)

  Also add C comments. This should help future debugging of this notorious file.

* Add min and max aggregates for inet/cidr data types. (Tom Lane, 2014-08-28)

  Haribabu Kommi, reviewed by Muhammad Asif Naeem

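  For instance, against a built-in inet column:

      SELECT min(client_addr), max(client_addr) FROM pg_stat_activity;
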
* Allow multibyte characters as escape in SIMILAR TO and SUBSTRING. (Jeff Davis, 2014-08-27)

  Previously, only a single-byte character was allowed as an escape. This patch allows it to be a multi-byte character, though it still must be a single character.

  Reviewed by Heikki Linnakangas and Tom Lane.

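  For example, using '§' (two bytes in UTF-8) as the escape character:

      SELECT substring('foo123bar' from '%§"[0-9]+§"%' for '§');  -- '123'
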
* Fix typo in b34e37bfefbed1bf9396dde18f308d8b96fd176c. (Robert Haas, 2014-08-26)

  Spotted by Peter Geoghegan.

* Rename macro isTempOrToastNamespace to isTempOrTempToastNamespace (Bruce Momjian, 2014-08-25)

  Done for clarity

* Fix corner-case behaviors in JSON/JSONB field extraction operators. (Tom Lane, 2014-08-22)

  Cause the path extraction operators to return their lefthand input, not NULL, if the path array has no elements. This seems more consistent since the case ought to correspond to applying the simple extraction operator (->) zero times.

  Cause other corner cases in field/element/path extraction to return NULL rather than failing. This behavior is arguably more useful than throwing an error, since it allows an expression index using these operators to be built even when not all values in the column are suitable for the extraction being indexed.

  Moreover, we already had multiple inconsistencies between the path extraction operators and the simple extraction operators, as well as inconsistencies between the JSON and JSONB code paths. Adopt a uniform rule of returning NULL rather than throwing an error when the JSON input does not have a structure that permits the request to be satisfied.

  Back-patch to 9.4. Update the release notes to list this as a behavior change since 9.3.

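  A few of the corner cases after this change:

      SELECT '{"a": 1}'::jsonb #> '{}';        -- {"a": 1} (the input), not NULL
      SELECT '{"a": 1}'::jsonb -> 'missing';   -- NULL
      SELECT '[1, 2, 3]'::jsonb -> 'a';        -- NULL (wrong structure), not an error
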
* Fix core dump in jsonb #> operator, and add regression test cases. (Tom Lane, 2014-08-20)

  jsonb's #> operator segfaulted (dereferencing a null pointer) if the RHS was a zero-length array, as reported in bug #11207 from Justin Van Winkle. json's #> operator returns NULL in such cases, so for the moment let's make jsonb act likewise.

  Also add a bunch of regression test queries memorializing the -> and #> operators' behavior for this and other corner cases.

  There is a good argument for changing some of these behaviors, as they are not very consistent with each other, and throwing an error isn't necessarily a desirable behavior for operators that are likely to be used in indexes. However, everybody can agree that a core dump is the Wrong Thing, and we need test cases even if we decide to change their expected output later.

* Use ISO 8601 format for dates converted to JSON, too. (Tom Lane, 2014-08-17)

  Commit f30015b6d794c15d52abbb3df3a65081fbefb1ed made this happen for timestamp and timestamptz, but it seems pretty inconsistent to not do it for simple dates as well.

  (In passing, I re-pgindent'd json.c.)

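  So a date now renders the same way regardless of the DateStyle setting:

      SET datestyle = 'German';
      SELECT to_json(current_date);   -- e.g. "2014-08-17", not "17.08.2014"
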
* Fix bogus return macros in range_overright_internal(). (Tom Lane, 2014-08-16)

  PG_RETURN_BOOL() should only be used in functions following the V1 SQL function API. This coding accidentally fails to fail, since letting the compiler coerce the Datum representation of bool back to plain bool does give the right answer; but that doesn't make it a good idea.

  Back-patch to older branches just to avoid unnecessary code divergence.

* Add sortsupport routines for text. (Robert Haas, 2014-08-14)

  This provides a small but worthwhile speedup when sorting text, at least in cases to which the sortsupport machinery applies.

  Robert Haas and Peter Geoghegan

* Clean up handling of unknown-type inputs in json_build_object and friends. (Tom Lane, 2014-08-09)

  There's actually no need for any special case for unknown-type literals, since we only need to push the value through its output function and unknownout() works fine. The code that was here was completely bizarre anyway, and would fail outright in cases that should work, not to mention suffering from some copy-and-paste bugs.

* Further cleanup of JSON-specific error messages. (Tom Lane, 2014-08-09)

  Fix an obvious typo in json_build_object()'s complaint about invalid number of arguments, and make the errhint a bit more sensible too.

  Per discussion about how to word the improved hint, change the few places in the documentation that refer to JSON object field names as "names" to say "keys" instead, since that's what we've said in the vast majority of places in the docs. Arguably "name" is more correct, since that's the terminology used in RFC 7159; but we're stuck with "key" in view of the naming of json_object_keys(), so let's at least be self-consistent.

  I adjusted a few code comments to match this as well, and failed to resist the temptation to clean up some odd whitespace choices in the same area, as well as a useless duplicate PG_ARGISNULL() check. There's still quite a bit of code that uses the phrase "field name" in non-user-visible ways, so I left those usages alone.

* Improve some JSON error messages. (Robert Haas, 2014-08-05)

  These messages are new in 9.4, which hasn't been released yet, so back-patch to REL9_4_STABLE.

  Daniele Varrazzo

* Allow empty string object keys in json_object(). (Andrew Dunstan, 2014-07-22)

  This makes the behaviour consistent with the json parser, other json-generating functions, and the JSON standards.

* Partial fix for dropped columns in functions returning composite. (Tom Lane, 2014-07-19)

  When a view has a function-returning-composite in FROM, and there are some dropped columns in the underlying composite type, ruleutils.c printed junk in the column alias list for the reconstructed FROM entry. Before 9.3, this was prevented by doing get_rte_attribute_is_dropped tests while printing the column alias list; but that solution is not currently available to us, for reasons I'll explain below. Instead, check for empty-string entries in the alias list, which can only exist if that column position had been dropped at the time the view was made. (The parser fills in empty strings to preserve the invariant that the aliases correspond to physical column positions.)

  While this is sufficient to handle the case of columns dropped before the view was made, we have still got issues with columns dropped after the view was made. In particular, the view could contain Vars that explicitly reference such columns! The dependency machinery really ought to refuse the column drop attempt in such cases, as it would do when trying to drop a table column that's explicitly referenced in views. However, we currently neglect to store dependencies on columns of composite types, and fixing that is likely to be too big to be back-patchable (not to mention that existing views in existing databases would not have the needed pg_depend entries anyway). So I'll leave that for a separate patch.

  Pre-9.3, ruleutils would print such Vars normally (with their original column names) even though it suppressed their entries in the RTE's column alias list. This is certainly bogus, since the printed view definition would fail to reload, but at least it didn't crash. However, as of 9.3 the printed column alias list is tightly tied to the names printed for Vars; so we can't treat columns as dropped for one purpose and not dropped for the other. This is why we can't just put back the get_rte_attribute_is_dropped test: it results in an assertion failure if the view in fact contains any Vars referencing the dropped column. Once we've got dependencies preventing such cases, we'll probably want to do it that way instead of relying on the empty-string test used here.

  This fix turned up a very ancient bug in outfuncs/readfuncs, namely that T_String nodes containing empty strings were not dumped/reloaded correctly: the node was printed as "<>" which is read as a string value of <>. Since (per SQL) we disallow empty-string identifiers, such nodes don't occur normally, which is why we'd not noticed. (Such nodes aren't used for literal constants, just identifiers.)

  Per report from Marc Schablewski. Back-patch to 9.3, which is where the rule printing behavior changed. The dangling-variable case is broken all the way back, but that's not what his complaint is about.

* Fix bugs in SP-GiST search with range type's -|- (adjacent) operator. (Heikki Linnakangas, 2014-07-16)

  The consistent function contained several bugs:

  * The "if (which2) { ... }" block was broken. It compared the argument's lower bound against the centroid's upper bound, while it was supposed to compare the argument's upper bound against the centroid's lower bound (the comment was correct, the code was wrong). Also, it cleared bits in the "which1" variable, while it was supposed to clear bits in "which2".

  * If the argument's upper bound was equal to the centroid's lower bound, we descended to both halves (= all quadrants). That's unnecessary, searching the right quadrants is sufficient. This didn't lead to incorrect query results, but was clearly wrong, and slowed down queries unnecessarily.

  * In the case that the argument's lower bound is adjacent to the centroid's upper bound, we also don't need to visit all quadrants. Per similar reasoning as the previous point.

  * The code where we compare the previous centroid with the current centroid should match the code where we compare the current centroid with the argument. The point of that code is to redo the calculation done in the previous level, to see if we were supposed to traverse left or right (or up or down), and if we actually did. If we moved in the different direction, then we know there are no matches for bound.

  Refactor the code and add comments to make it more readable and easier to reason about.

  Backpatch to 9.3 where SP-GiST support for range types was introduced.

* Add missing serial commas (Peter Eisentraut, 2014-07-15)

  Also update one place where the wal_level "logical" was not added to an error message.

* Fix error hint style. (Robert Haas, 2014-07-09)

  Mistake caught by Tom Lane.

* Improve error messages for bytea decoding failures. (Robert Haas, 2014-07-09)

  Craig Ringer

* Consistently pass an "unsigned char" to ctype.h functions. (Noah Misch, 2014-07-06)

  The isxdigit() calls relied on undefined behavior. The isascii() call was well-defined, but our prevailing style is to include the cast.

  Back-patch to 9.4, where the isxdigit() calls were introduced.

* Don't cache per-group context across the whole query in orderedsetaggs.c. (Tom Lane, 2014-07-03)

  Although nodeAgg.c currently uses the same per-group memory context for all groups of a query, that might change in future. Avoid assuming it. This costs us an extra AggCheckCallContext() call per group, but that's pretty cheap and is probably good from a safety standpoint anyway.

  Back-patch to 9.4 in case any third-party code copies this logic.

  Andrew Gierth

* Redesign API presented by nodeAgg.c for ordered-set and similar aggregates. (Tom Lane, 2014-07-03)

  The previous design exposed the input and output ExprContexts of the Agg plan node, but work on grouping sets has suggested that we'll regret doing that. Instead provide more narrowly-defined APIs that can be implemented in multiple ways, namely a way to get a short-term memory context and a way to register an aggregate shutdown callback.

  Back-patch to 9.4 where the bad APIs were introduced, since we don't want third-party code using these APIs and then having to change in 9.5.

  Andrew Gierth

* Remove use_json_as_text options from json_to_record/json_populate_record. (Tom Lane, 2014-06-29)

  The "false" case was really quite useless since all it did was to throw an error; a definition not helped in the least by making it the default. Instead let's just have the "true" case, which emits nested objects and arrays in JSON syntax. We might later want to provide the ability to emit sub-objects in Postgres record or array syntax, but we'd be best off to drive that off a check of the target field datatype, not a separate argument.

  For the functions newly added in 9.4, we can just remove the flag arguments outright. We can't do that for json_populate_record[set], which already existed in 9.3, but we can ignore the argument and always behave as if it were "true". It helps that the flag arguments were optional and not documented in any useful fashion anyway.

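  With the flag gone, nested values simply come through as JSON:

      SELECT * FROM json_to_record('{"a": 1, "b": {"c": 2}}')
          AS x(a int, b json);
      -- a = 1, b = {"c": 2}
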
* Rationalize error messages within jsonfuncs.c. (Tom Lane, 2014-06-25)

  I noticed that the functions in jsonfuncs.c sometimes printed error messages that claimed I'd called some other function. Investigation showed that this was from repurposing code into "worker" functions without taking much care as to whether it would mention the right SQL-level function if it threw an error. Moreover, there was a weird mishmash of messages that contained a fixed function name, messages that used %s for a function name, and messages that constructed a function name out of spare parts, like "json%s_populate_record" (which, quite aside from being ugly as sin, wasn't even sufficient to cover all the cases). This would put an undue burden on our long-suffering translators.

  Standardize on inserting the SQL function name with %s so as to reduce the number of translatable strings, and pass function names around as needed to make sure we can report the right one. Fix up some gratuitous variations in wording, too.

* Cosmetic improvements in jsonfuncs.c. (Tom Lane, 2014-06-25)

  Re-pgindent, remove a lot of random vertical whitespace, remove useless (if not counterproductive) inline markings, get rid of unnecessary zero-padding of strings for hashtable searches. No functional changes.

* Fix handling of nested JSON objects in json_populate_recordset and friends. (Tom Lane, 2014-06-24)

  populate_recordset_object_start() improperly created a new hash table (overwriting the link to the existing one) if called at nest levels greater than one. This resulted in previous fields not appearing in the final output, as reported by Matti Hameister in bug #10728. In 9.4 the problem also affects json_to_recordset. This perhaps missed detection earlier because the default behavior is to throw an error for nested objects: you have to pass use_json_as_text = true to see the problem.

  In addition, fix query-lifespan leakage of the hashtable created by json_populate_record(). This is pretty much the same problem recently fixed in dblink: creating an intended-to-be-temporary context underneath the executor's per-tuple context isn't enough to make it go away at the end of the tuple cycle, because MemoryContextReset is not MemoryContextResetAndDeleteChildren.

  Michael Paquier and Tom Lane

* Remove unnecessary check for jbvBinary in convertJsonbValue. (Andrew Dunstan, 2014-06-18)

  The check was confusing and is a condition that should never in fact happen.

  Per gripe from Dmitry Dolgov.

* Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ... (Tom Lane, 2014-06-18)

  This SQL-standard feature allows a sub-SELECT yielding multiple columns (but only one row) to be used to compute the new values of several columns to be updated. While the same results can be had with an independent sub-SELECT per column, such a workaround can require a great deal of duplicated computation.

  The standard actually says that the source for a multi-column assignment could be any row-valued expression. The implementation used here is tightly tied to our existing sub-SELECT support and can't handle other cases; the Bison grammar would have some issues with them too. However, I don't feel too bad about this since other cases can be converted into sub-SELECTs. For instance, "SET (a,b,c) = row_valued_function(x)" could be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".

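  A sketch of the new syntax (table and column names are hypothetical):

      UPDATE accounts a
         SET (balance, updated_at) =
             (SELECT sum(p.amount), max(p.paid_at)
                FROM payments p
               WHERE p.account_id = a.id)
       WHERE a.id = 42;
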
* Fix typos (Alvaro Herrera, 2014-06-12)

* Add btree and hash opclasses for pg_lsn. (Tom Lane, 2014-06-04)

  This is needed to allow ORDER BY, DISTINCT, etc to work as expected for pg_lsn values. We had previously decided to put this off for 9.5, but in view of commit eeca4cd35e284c72b2ea1b4494e64e7738896e81 there's no reason to avoid a catversion bump for 9.4beta2, and this does make a pretty significant usability difference for pg_lsn.

  Michael Paquier, with fixes from Andres Freund and Tom Lane

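  Queries like this over a pg_lsn column now work:

      SELECT DISTINCT restart_lsn
        FROM pg_replication_slots
       ORDER BY restart_lsn;
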
* Use EncodeDateTime instead of to_char to render JSON timestamps. (Andrew Dunstan, 2014-06-03)

  Per gripe from Peter Eisentraut and Tom Lane. The output is slightly different, but still ISO 8601 compliant: to_char doesn't output the minutes when the time zone offset is an integer number of hours, while EncodeDateTime outputs ":00".

  The code is slightly adapted from code in xml.c