path: root/src/backend
* Support timezone abbreviations that sometimes change.  (Tom Lane, 2014-10-16)

  Up to now, PG has assumed that any given timezone abbreviation (such as "EDT") represents a constant GMT offset in the usage of any particular region; we had a way to configure what that offset was, but not for it to be changeable over time. But, as with most things horological, this view of the world is too simplistic: there are numerous regions that have at one time or another switched to a different GMT offset but kept using the same timezone abbreviation. Almost the entire Russian Federation did that a few years ago, and later this month they're going to do it again. And there are similar examples all over the world.

  To cope with this, invent the notion of a "dynamic timezone abbreviation", which is one that is referenced to a particular underlying timezone (as defined in the IANA timezone database) and means whatever it currently means in that zone. For zones that use or have used daylight-savings time, the standard and DST abbreviations continue to have the property that you can specify standard or DST time and get that time offset whether or not DST was theoretically in effect at the time. However, the abbreviations mean what they meant at the time in question (or most recently before that time) rather than being absolutely fixed.

  The standard abbreviation-list files have been changed to use this behavior for abbreviations that have actually varied in meaning since 1970. The old simple-numeric definitions are kept for abbreviations that have not changed, since they are a bit faster to resolve.

  While this is clearly a new feature, it seems necessary to back-patch it into all active branches, because otherwise use of Russian zone abbreviations is going to become even more problematic than it already was. This change supersedes the changes in commit 513d06ded et al to modify the fixed meanings of the Russian abbreviations; since we've not shipped that yet, this will avoid an undesirably incompatible (not to mention incorrect) change in behavior for timestamps between 2011 and 2014.

  This patch makes some cosmetic changes in ecpglib to keep its usage of datetime lookup tables as similar as possible to the backend code, but doesn't do anything about the increasingly obsolete set of timezone abbreviation definitions that are hard-wired into ecpglib. Whatever we do about that will likely not be appropriate material for back-patching. Also, a potential free() of a garbage pointer after an out-of-memory failure in ecpglib has been fixed.

  This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that caused it to produce unexpected results near a timezone transition, if both the "before" and "after" states are marked as standard time. We'd only ever thought about or tested transitions between standard and DST time, but that's not what's happening when a zone simply redefines their base GMT offset.

  In passing, update the SGML documentation to refer to the Olson/zoneinfo/zic timezone database as the "IANA" database, since it's now being maintained under the auspices of IANA.
* Print planning time only in EXPLAIN ANALYZE, not plain EXPLAIN.  (Tom Lane, 2014-10-15)

  We've gotten enough push-back on that change to make it clear that it wasn't an especially good idea to do it like that. Revert plain EXPLAIN to its previous behavior, but keep the extra output in EXPLAIN ANALYZE. Per discussion.

  Internally, I set this up as a separate flag ExplainState.summary that controls printing of planning time and execution time. For now it's just copied from the ANALYZE option, but we could consider exposing it to users.
* Fix deadlock with LWLockAcquireWithVar and LWLockWaitForVar.  (Heikki Linnakangas, 2014-10-14)

  LWLockRelease should release all backends waiting with LWLockWaitForVar, even when another backend has already been woken up to acquire the lock, i.e. when releaseOK is false. LWLockWaitForVar can return as soon as the protected value changes, even if the other backend will acquire the lock. Fix that by resetting releaseOK to true in LWLockWaitForVar, whenever it adds itself to the wait queue.

  This should fix the bug reported by MauMau, where the system occasionally hangs when there is a lot of concurrent WAL activity and a checkpoint. Backpatch to 9.4, where this code was added.
* C comments: adjust execTuples.c for new structure.  (Bruce Momjian, 2014-10-13)

  Report by Peter Geoghegan
* Consistently use NULL for invalid GUC unit strings.  (Bruce Momjian, 2014-10-13)

  Patch by Euler Taveira
* Increase number of hash join buckets for underestimate.  (Kevin Grittner, 2014-10-13)

  If we expect batching at the very beginning, we size nbuckets for "full work_mem" (see how many tuples we can get into work_mem, while not breaking the NTUP_PER_BUCKET threshold).

  If we expect to be fine without batching, we start with the 'right' nbuckets and track the optimal nbuckets as we go (without actually resizing the hash table). Once we hit work_mem (considering the optimal nbuckets value), we keep the value.

  At the end of the first batch, we check whether (nbuckets != nbuckets_optimal) and resize the hash table if needed. Also, we keep this value for all batches (it's OK because it assumes full work_mem, and it makes the batchno evaluation trivial). So the resize happens only once.

  There could be cases where it would improve performance to allow the NTUP_PER_BUCKET threshold to be exceeded to keep everything in one batch rather than spilling to a second batch, but attempts to generate such a case have so far been unsuccessful; that issue may be addressed with a follow-on patch after further investigation.

  Tomas Vondra with minor format and comment cleanup by me

  Reviewed by Robert Haas, Heikki Linnakangas, and Kevin Grittner
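  A minimal sketch of the sizing rule described above, assuming a power-of-two bucket count (names are hypothetical; the real logic lives in src/backend/executor/nodeHash.c and differs in detail):

      #include <stddef.h>

      #define NTUP_PER_BUCKET 1   /* target tuples per bucket */

      /* Pick a power-of-two bucket count large enough that an estimated
       * ntuples tuples stay within the NTUP_PER_BUCKET threshold. */
      static size_t
      choose_nbuckets(size_t ntuples)
      {
          size_t nbuckets = 1024;            /* arbitrary floor */

          while (nbuckets * NTUP_PER_BUCKET < ntuples)
              nbuckets <<= 1;                /* keep it a power of two */
          return nbuckets;
      }

      /* At the end of the first batch, resize only if the tracked optimum
       * diverged from the initial guess -- so the resize happens at most
       * once, and the same nbuckets is kept for all later batches. */
      static int
      needs_resize(size_t nbuckets, size_t nbuckets_optimal)
      {
          return nbuckets != nbuckets_optimal;
      }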
* Message improvements.  (Peter Eisentraut, 2014-10-12)
* Fix bogus optimization in JSONB containment tests.  (Tom Lane, 2014-10-11)

  When determining whether one JSONB object contains another, it's okay to make a quick exit if the first object has fewer pairs than the second: because we de-duplicate keys within objects, it is impossible that the first object has all the keys the second does. However, the code was applying this rule to JSONB arrays as well, where it does *not* hold because arrays can contain duplicate entries. The test was really in the wrong place anyway; we should do it within JsonbDeepContains, where it can be applied to nested objects, not only top-level ones.

  Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and Tom Lane.
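  A hedged illustration of the corrected rule (types are hypothetical stand-ins, not the real jsonb structures):

      typedef enum { JB_OBJECT, JB_ARRAY } jb_kind;

      /* The pair-count quick exit is only valid for objects: keys are
       * de-duplicated, so a smaller object cannot contain a larger one.
       * Arrays may hold duplicates -- '[1]' contains '[1,1,1]' -- so no
       * such shortcut applies to them. */
      static int
      may_contain(jb_kind kind, int val_count, int contained_count)
      {
          if (kind == JB_OBJECT && val_count < contained_count)
              return 0;           /* quick exit: containment impossible */
          return 1;               /* must do the full recursive check */
      }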
* Split builtins.h to a new header ruleutils.h.  (Alvaro Herrera, 2014-10-08)

  The new header contains many prototypes for functions in ruleutils.c that are not exposed to the SQL level.

  Reviewed by Andres Freund and Michael Paquier.
* Extend shm_mq API with new functions shm_mq_sendv, shm_mq_set_handle.  (Robert Haas, 2014-10-08)

  shm_mq_sendv sends a message to the queue assembled from multiple locations. This is expected to be used by forthcoming patches to allow frontend/backend protocol messages to be sent via shm_mq, but might be useful for other purposes as well.

  shm_mq_set_handle associates a BackgroundWorkerHandle with an already-existing shm_mq_handle. This solves a timing problem when creating a shm_mq to communicate with a newly-launched background worker: if you attach to the queue first, and the background worker fails to start, you might block forever trying to do I/O on the queue; but if you start the background worker first, but then die before attaching to the queue, the background worker might block forever trying to do I/O on the queue. This lets you attach before starting the worker (so that the worker is protected) and then associate the BackgroundWorkerHandle later (so that you are also protected).

  Patch by me, reviewed by Stephen Frost.
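  A hedged usage sketch of the ordering these functions enable (assumes a queue and worker definition set up along the usual dsm/bgworker lines; error handling omitted):

      #include "postgres.h"
      #include "postmaster/bgworker.h"
      #include "storage/dsm.h"
      #include "storage/shm_mq.h"

      static void
      send_two_part_message(shm_mq *mq, dsm_segment *seg,
                            BackgroundWorker *worker,
                            const char *header, Size header_len,
                            const char *body, Size body_len)
      {
          BackgroundWorkerHandle *wh;
          shm_mq_handle *mqh;
          shm_mq_iovec iov[2];

          /* Attach before launching the worker, so the worker side is
           * protected if we die before attaching ... */
          mqh = shm_mq_attach(mq, seg, NULL);

          /* ... then start the worker ... */
          (void) RegisterDynamicBackgroundWorker(worker, &wh);

          /* ... and associate its handle afterwards, so queue operations
           * can notice if the worker never started or has died. */
          shm_mq_set_handle(mqh, wh);

          /* Send one message assembled from two locations, no copying. */
          iov[0].data = header;
          iov[0].len = header_len;
          iov[1].data = body;
          iov[1].len = body_len;
          (void) shm_mq_sendv(mqh, iov, 2, false);
      }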
* Implement SKIP LOCKED for row-level locks.  (Alvaro Herrera, 2014-10-07)

  This clause changes the behavior of SELECT locking clauses in the presence of locked rows: instead of causing a process to block waiting for the locks held by other processes (or raise an error, with NOWAIT), SKIP LOCKED makes the new reader skip over such rows. While this is not appropriate behavior for general purposes, there are some cases in which it is useful, such as queue-like tables.

  Catalog version bumped because this patch changes the representation of stored rules.

  Reviewed by Craig Ringer (based on a previous attempt at an implementation by Simon Riggs, who also provided input on the syntax used in the current patch), David Rowley, and Álvaro Herrera.

  Author: Thomas Munro
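  A sketch of the queue-table pattern this enables, driven through libpq (table and column names hypothetical; error handling trimmed): each worker claims the next unlocked job instead of blocking behind other workers.

      #include <stdio.h>
      #include <libpq-fe.h>

      static void
      claim_next_job(PGconn *conn)
      {
          PGresult *res;

          res = PQexec(conn, "BEGIN");
          PQclear(res);

          /* Grab the first job row nobody else has locked. */
          res = PQexec(conn,
                       "SELECT id, payload FROM job_queue "
                       "ORDER BY id LIMIT 1 "
                       "FOR UPDATE SKIP LOCKED");
          if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
              printf("claimed job %s\n", PQgetvalue(res, 0, 0));
          PQclear(res);

          /* process the job, then COMMIT to release the row lock */
          res = PQexec(conn, "COMMIT");
          PQclear(res);
      }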
* Fix typo in elog message.  (Robert Haas, 2014-10-07)
* Translation updates.  (Peter Eisentraut, 2014-10-05)
* Eliminate one background-worker-related flag variable.  (Robert Haas, 2014-10-04)

  Teach sigusr1_handler() to use the same test for whether a worker might need to be started as ServerLoop(). Aside from being perhaps a bit simpler, this prevents a potentially-unbounded delay when starting a background worker. On some platforms, select() doesn't return when interrupted by a signal, but is instead restarted, including a reset of the timeout to the originally-requested value. If signals arrive often enough, but no connection requests arrive, sigusr1_handler() will be executed repeatedly, but the body of ServerLoop() won't be reached. This change ensures that, even in that case, background workers will eventually get launched.

  This is far from a perfect fix; really, we need select() to return control to ServerLoop() after an interrupt, either via the self-pipe trick or some other mechanism. But that's going to require more work and discussion, so let's do this for now to at least mitigate the damage.

  Per investigation of test_shm_mq failures on buildfarm member anole.
* Update time zone data files to tzdata release 2014h.  (Tom Lane, 2014-10-04)

  Most zones in the Russian Federation are subtracting one or two hours as of 2014-10-26. Update the meanings of the abbreviations IRKT, KRAT, MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.

  The IANA timezone database has adopted abbreviations of the form AxST/AxDT for all Australian time zones, reflecting what they believe to be current majority practice Down Under. These names do not conflict with usage elsewhere (other than ACST for Acre Summer Time, which has been in disuse since 1994). Accordingly, adopt these names into our "Default" timezone abbreviation set. The "Australia" abbreviation set now contains only CST, EAST, EST, SAST, SAT, WST, all of which are thought to be mostly historical usage. Note that SAST has also been changed to be South Africa Standard Time in the "Default" abbreviation set.

  Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi), and use WSST/WSDT for western Samoa. Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk), and numerous corrections for historical time zone data.
* Fix CreatePolicy, pg_dump -v; psql and doc updates.  (Stephen Frost, 2014-10-03)

  Peter G pointed out that valgrind was, rightfully, complaining about CreatePolicy() ending up copying beyond the end of the parsed policy name. Name is a fixed-size type and we need to use namein (through DirectFunctionCall1()) to flush out the entire array before we pass it down to heap_form_tuple.

  Michael Paquier pointed out that pg_dump --verbose was missing a newline and Fabrízio de Royes Mello further pointed out that the schema was also missing from the messages, so fix those also.

  Also, based on an off-list comment from Kevin, rework the psql \d output to facilitate copy/pasting into a new CREATE or ALTER POLICY command.

  Lastly, improve the pg_policies view and update the documentation for it, along with a few other minor doc corrections based on an off-list discussion with Adam Brightwell.
* Don't balance vacuum cost delay when per-table settings are in effect.  (Alvaro Herrera, 2014-10-03)

  When there are cost-delay-related storage options set for a table, trying to make that table participate in the autovacuum cost-limit balancing algorithm produces undesirable results: instead of using the configured values, the global values are always used, as illustrated by Mark Kirkwood in http://www.postgresql.org/message-id/52FACF15.8020507@catalyst.net.nz

  Since the mechanism is already complicated, just disable it for those cases rather than trying to make it cope. There are undesirable side-effects from this too, namely that the total I/O impact on the system will be higher whenever such tables are vacuumed. However, this is seen as less harmful than slowing down vacuum, because that would cause bloat to accumulate. Anyway, in the new system it is possible to tweak options to get the precise behavior one wants, whereas with the previous system one was simply hosed.

  This has been broken forever, so backpatch to all supported branches. This might affect systems where cost_limit and cost_delay have been set for individual tables.
* Fix typos in comments.  (Robert Haas, 2014-10-03)

  Etsuro Fujita
* Check for GiST index tuples that don't fit on a page.  (Heikki Linnakangas, 2014-10-03)

  The page splitting code would go into infinite recursion if you try to insert an index tuple that doesn't fit even on an empty page.

  Per analysis and suggested fix by Andrew Gierth. Fixes bug #11555, reported by Bryan Seitz (analysis happened over IRC). Backpatch to all supported versions.
* Increase the number of buffer mapping partitions to 128.  (Robert Haas, 2014-10-02)

  Testing by Amit Kapila, Andres Freund, and myself, with and without other patches that also aim to improve scalability, seems to indicate that this change is a significant win over the current value and over smaller values such as 64. It's not clear how high we can push this value before it starts to have negative side-effects elsewhere, but going this far looks OK.
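  For context, a sketch of how the partition count is used (macro names along the lines of the backend's buf_internals.h; treat as illustrative):

      /* The shared buffer mapping table is guarded by a set of lwlocks;
       * a buffer tag's hash code selects the partition, so lookups of
       * unrelated buffers rarely contend on the same lock. */
      #define NUM_BUFFER_PARTITIONS  128

      #define BufTableHashPartition(hashcode) \
          ((hashcode) % NUM_BUFFER_PARTITIONS)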
* Fix some more problems with nested append relations.  (Tom Lane, 2014-10-01)

  As of commit a87c72915 (which later got backpatched as far as 9.1), we're explicitly supporting the notion that append relations can be nested; this can occur when UNION ALL constructs are nested, or when a UNION ALL contains a table with inheritance children.

  Bug #11457 from Nelson Page, as well as an earlier report from Elvis Pranskevichus, showed that there were still nasty bugs associated with such cases: in particular the EquivalenceClass mechanism could try to generate "join" clauses connecting an appendrel child to some grandparent appendrel, which would result in assertion failures or bogus plans.

  Upon investigation I concluded that all current callers of find_childrel_appendrelinfo() need to be fixed to explicitly consider multiple levels of parent appendrels. The most complex fix was in processing of "broken" EquivalenceClasses, which are ECs for which we have been unable to generate all the derived equality clauses we would like to because of missing cross-type equality operators in the underlying btree operator family. That code path is more or less entirely untested by the regression tests to date, because no standard opfamilies have such holes in them. So I wrote a new regression test script to try to exercise it a bit, which turned out to be quite a worthwhile activity as it exposed existing bugs in all supported branches.

  The present patch is essentially the same as far back as 9.2, which is where parameterized paths were introduced. In 9.0 and 9.1, we only need to back-patch a small fragment of commit 5b7b5518d, which fixes failure to propagate out the original WHERE clauses when a broken EC contains constant members. (The regression test case results show that these older branches are noticeably stupider than 9.2+ in terms of the quality of the plans generated; but we don't really care about plan quality in such cases, only that the plan not be outright wrong. A more invasive fix in the older branches would not be a good idea anyway from a plan-stability standpoint.)
* Remove num_xloginsert_locks GUC, replace with a #define.  (Heikki Linnakangas, 2014-10-01)

  I left the GUC in place for the beta period, so that people could experiment with different values. No-one's come up with any data that a different value would be better under some circumstances, so rather than try to document to users what the GUC does, let's just hard-code the current value, 8.
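  The replacement amounts to a one-line definition in xlog.c (value per the commit message above):

      /* Number of WAL insertion locks; formerly the num_xloginsert_locks GUC. */
      #define NUM_XLOGINSERT_LOCKS  8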
* Block signals while computing the sleep time in postmaster's main loop.  (Andres Freund, 2014-10-01)

  DetermineSleepTime() was previously called with signals unblocked. That's not good, because it allows signal handlers to interrupt its workings.

  DetermineSleepTime() was added in 9.3 with the addition of background workers (da07a1e856511), where it only read from BackgroundWorkerList. Since 9.4, where dynamic background workers were added (7f7485a0cde), the list is also manipulated in DetermineSleepTime(). That's bad because the list now can be persistently corrupted if modified by both a signal handler and DetermineSleepTime().

  This was discovered during the investigation of hangs on buildfarm member anole. It's unclear whether this bug is the source of these hangs or not, but it's worth fixing either way. I have confirmed that it can cause crashes. It luckily looks like this only can cause problems when bgworkers are actively used.

  Discussion: 20140929193733.GB14400@awork2.anarazel.de

  Backpatch to 9.3 where background workers were introduced.
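  A hedged sketch of the pattern the fix establishes (the real postmaster uses PG_SETMASK and its own signal sets; this is a generic POSIX rendering):

      #include <signal.h>
      #include <stddef.h>
      #include <sys/select.h>
      #include <sys/time.h>

      /* Stand-in for postmaster's DetermineSleepTime(). */
      static void
      determine_sleep_time(struct timeval *timeout)
      {
          timeout->tv_sec = 60;
          timeout->tv_usec = 0;
      }

      /* Compute the sleep time with signals blocked, so no handler can
       * mutate the same worker list mid-computation; unblock only for
       * the interruptible select() itself. */
      static void
      server_loop_iteration(void)
      {
          sigset_t block_all, saved;
          struct timeval timeout;

          sigfillset(&block_all);
          sigprocmask(SIG_BLOCK, &block_all, &saved);   /* signals off */

          determine_sleep_time(&timeout);

          sigprocmask(SIG_SETMASK, &saved, NULL);       /* signals on */

          (void) select(0, NULL, NULL, NULL, &timeout);
      }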
* Improve documentation about binary/textual output mode for output plugins.  (Andres Freund, 2014-10-01)

  Also improve the related error message, as it contributed to the confusion.

  Discussion: CAB7nPqQrqFzjqCjxu4GZzTrD9kpj6HMn9G5aOOMwt1WZ8NfqeA@mail.gmail.com, CAB7nPqQXc_+g95zWnqaa=mVQ4d3BVRs6T41frcEYi2ocUrR3+A@mail.gmail.com

  Per discussion between Michael Paquier, Robert Haas and Andres Freund

  Backpatch to 9.4 where logical decoding was introduced.
* Rename CACHE_LINE_SIZE to PG_CACHE_LINE_SIZE.  (Andres Freund, 2014-10-01)

  As noted in http://bugs.debian.org/763098 there is a conflict between postgres' definition of CACHE_LINE_SIZE and the definition by various *bsd platforms. It's debatable who has the right to define such a name, but postgres' use was only introduced in 375d8526f290 (9.4), so it seems like a good idea to rename it.

  Discussion: 20140930195756.GC27407@msg.df7cb.de

  Per complaint of Christoph Berg in the above email, although he's not the original bug reporter.

  Backpatch to 9.4 where the define was introduced.
* Revert 95d737ff to add 'ignore_nulls'.  (Stephen Frost, 2014-09-29)

  Per discussion, revert the commit which added 'ignore_nulls' to row_to_json. This capability would be better added as an independent function rather than being bolted on to row_to_json. Additionally, the implementation didn't address complex JSON objects, and so was incomplete anyway.

  Pointed out by Tom and discussed with Andrew and Robert.
* Change JSONB's on-disk format for improved performance.  (Tom Lane, 2014-09-29)

  The original design used an array of offsets into the variable-length portion of a JSONB container. However, such an array is basically uncompressible by simple compression techniques such as TOAST's LZ compressor. That's bad enough, but because the offset array is at the front, it tended to trigger the give-up-after-1KB heuristic in the TOAST code, so that the entire JSONB object was stored uncompressed; which was the root cause of bug #11109 from Larry White.

  To fix without losing the ability to extract a random array element in O(1) time, change this scheme so that most of the JEntry array elements hold lengths rather than offsets. With data that's compressible at all, there tend to be fewer distinct element lengths, so that there is scope for compression of the JEntry array. Every N'th entry is still an offset. To determine the length or offset of any specific element, we might have to examine up to N preceding JEntrys, but that's still O(1) so far as the total container size is concerned. Testing shows that this cost is negligible compared to other costs of accessing a JSONB field, and that the method does largely fix the incompressible-data problem.

  While at it, rearrange the order of elements in a JSONB object so that it's "all the keys, then all the values" not alternating keys and values. This doesn't really make much difference right at the moment, but it will allow providing a fast path for extracting individual object fields from large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the existing optimization for substring extraction from large EXTERNAL text values.

  Bump catversion to denote the incompatibility in on-disk format. We will need to fix pg_upgrade to disallow upgrading jsonb data stored with 9.4 betas 1 and 2.

  Heikki Linnakangas and Tom Lane
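  A hedged sketch of the lengths-plus-stride-offsets scheme (constants and layout hypothetical, not the actual JEntry encoding):

      #include <stdint.h>

      #define STRIDE 32   /* assume every STRIDE'th entry stores an offset */

      /* Assume entries[k] holds the END offset of element k when k is a
       * multiple of STRIDE, and the LENGTH of element k otherwise. */
      static uint32_t
      element_end(const uint32_t *entries, int k)
      {
          int      base = k - (k % STRIDE);
          uint32_t end;
          int      j;

          end = entries[base];            /* nearest stored offset */
          for (j = base + 1; j <= k; j++)
              end += entries[j];          /* add stored lengths */
          return end;
      }

      /* Element i starts where element i-1 ends: at most STRIDE-1
       * additions, so access stays O(1) in the container size, while the
       * mostly-lengths entry array compresses far better than offsets. */
      static uint32_t
      element_start(const uint32_t *entries, int i)
      {
          return (i == 0) ? 0 : element_end(entries, i - 1);
      }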
* Fix relcache for policies, and doc updates.  (Stephen Frost, 2014-09-26)

  Andres pointed out that there was an extra ';' in equalPolicies, which made me realize that my prior testing with CLOBBER_CACHE_ALWAYS was insufficient (it didn't always catch the issue, just most of the time). Thanks to that, a different issue was discovered, specifically in equalRSDescs. This change corrects equalRSDescs to return 'true' once all policies have been confirmed logically identical. After stepping through both functions to ensure correct behavior, I ran this for about 12 hours of CLOBBER_CACHE_ALWAYS runs of the regression tests with no failures.

  In addition, correct a few typos in the documentation which were pointed out by Thom Brown (thanks!) and improve the policy documentation further by adding a fleshed-out usage example based on a unix passwd file.

  Lastly, clean up a few comments in the regression tests and pg_dump.h.
* Fix whitespace.  (Peter Eisentraut, 2014-09-26)
* Add a basic atomic ops API abstracting away platform/architecture details.  (Andres Freund, 2014-09-25)

  Several upcoming performance/scalability improvements require atomic operations. This new API avoids the need to splatter compiler and architecture dependent code over all the locations employing atomic ops.

  For several of the potential usages it'd be problematic to maintain both an atomics-using implementation and one using spinlocks or similar. In all likelihood one of the implementations would not get tested regularly under concurrency. To avoid that scenario the new API provides an automatic fallback of atomic operations to spinlocks. All properties of atomic operations are maintained. This fallback - obviously - isn't as fast as just using atomic ops, but it's not bad either. For one of the future users the atomics-on-top-of-spinlocks implementation was actually slightly faster than the old purely spinlock-using implementation. That's important because it reduces the fear of regressing older platforms when improving the scalability for new ones.

  The API, loosely modeled after the C11 atomics support, currently provides 'atomic flags' and 32 bit unsigned integers. If the platform efficiently supports atomic 64 bit unsigned integers those are also provided.

  To implement atomics support for a platform/architecture/compiler, only 32 bit compare-and-exchange needs to be implemented. If available and more efficient, native support for flags, 32 bit atomic addition, and corresponding 64 bit operations may also be provided. Additional useful atomic operations are implemented generically on top of these.

  The implementation for various versions of gcc, msvc and sun studio have been tested. Additional existing stub implementations for
  * Intel icc
  * HP-UX acc
  * IBM xlc
  are included but have never been tested. These will likely require fixes based on buildfarm and user feedback.

  As atomic operations also require barriers for some operations the existing barrier support has been moved into the atomics code.

  Author: Andres Freund with contributions from Oskari Saarenmaa

  Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera

  Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com, 20131015123303.GH5300@awork2.anarazel.de, 20131028205522.GI20248@awork2.anarazel.de
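  A hedged usage sketch of the new API (pg_atomic_* names per the commit; the counter and header path are assumptions, not backend code):

      #include "postgres.h"
      #include "port/atomics.h"

      static pg_atomic_uint32 active_count;   /* hypothetical shared counter */

      static void
      counter_demo(void)
      {
          uint32 expected;

          pg_atomic_init_u32(&active_count, 0);

          /* Lock-free increment; transparently falls back to the
           * spinlock-based emulation on platforms without native atomics. */
          pg_atomic_fetch_add_u32(&active_count, 1);

          /* Compare-and-exchange: set to 10 only if currently 1; on
           * failure, 'expected' is updated to the observed value. */
          expected = 1;
          if (pg_atomic_compare_exchange_u32(&active_count, &expected, 10))
          {
              /* succeeded */
          }

          (void) pg_atomic_read_u32(&active_count);
      }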
* Remove ill-conceived ban on zero length json object keys.  (Andrew Dunstan, 2014-09-25)

  We removed a similar ban on this in json_object recently, but the ban in datum_to_json was left, which generated spurious errors in other json generators, notably json_build_object.

  Along the way, add an assertion that datum_to_json is not passed a null key. All current callers comply with this rule, but the assertion will catch any possible future misbehaviour.
* Change locking regimen around buffer replacement.  (Robert Haas, 2014-09-25)

  Previously, we used an lwlock that was held from the time we began seeking a candidate buffer until the time when we found and pinned one, which is disastrous for concurrency. Instead, use a spinlock which is held just long enough to pop the freelist or advance the clock sweep hand, and then released. If we need to advance the clock sweep further, we reacquire the spinlock once per buffer.

  This represents a significant increase in atomic operations around buffer eviction, but it still wins on many workloads. On others, it may result in no gain, or even cause a regression, unless the number of buffer mapping locks is also increased. However, that seems like material for a separate commit. We may also need to consider other methods of mitigating contention on this spinlock, such as splitting it into multiple locks or jumping the clock sweep hand more than one buffer at a time, but those, too, seem like separate improvements.

  Patch by me, inspired by a much larger patch from Amit Kapila. Reviewed by Andres Freund.
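  A hedged sketch of the new regimen (structure and field names hypothetical): the spinlock covers only the advance of the sweep hand, never the per-buffer inspection.

      #include "postgres.h"
      #include "storage/spin.h"

      typedef struct
      {
          slock_t lock;            /* protects nextVictimBuffer only */
          int     nextVictimBuffer;
          int     nbuffers;
      } strategy_ctl;

      static int
      advance_sweep_hand(strategy_ctl *ctl)
      {
          int victim;

          SpinLockAcquire(&ctl->lock);             /* held very briefly */
          victim = ctl->nextVictimBuffer;
          if (++ctl->nextVictimBuffer >= ctl->nbuffers)
              ctl->nextVictimBuffer = 0;
          SpinLockRelease(&ctl->lock);

          return victim;   /* caller checks usage count and pins without
                            * the lock, reacquiring it once per buffer */
      }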
* Fix VPATH builds of the replication parser from git for some !gcc compilers.  (Andres Freund, 2014-09-25)

  Some compilers don't automatically search the current directory for included files. 9cc2c182fc2 fixed that for builds from tarballs by adding an include to the source directory. But that doesn't work when the scanner is generated in the VPATH directory. Use the same search path as the other parsers in the tree.

  One compiler that definitely was affected is solaris' sun cc.

  Backpatch to 9.1 which introduced using an actual parser for replication commands.
* Return NULL from json_object_agg if it gets no rows.  (Andrew Dunstan, 2014-09-25)

  This makes it consistent with the docs and with all other builtin aggregates apart from count().
* Copy-editing of row security.  (Stephen Frost, 2014-09-24)

  Address a few typos in the row security update, pointed out off-list by Adam Brightwell. Also include 'ALL' in the list of commands supported, for completeness.
* Code review for row security.  (Stephen Frost, 2014-09-24)

  Buildfarm member tick identified an issue where the policies in the relcache for a relation were being replaced underneath a running query, leading to segfaults while processing the policies to be added to a query. Similar to how TupleDesc RuleLocks are handled, add an equalRSDesc() function to check if the policies have actually changed and, if not, swap back the rsdesc field (using the original instead of the temporarily built one; the whole structure is swapped and then specific fields swapped back). This now passes a CLOBBER_CACHE_ALWAYS run for me and should resolve the buildfarm error.

  In addition to addressing this, add a new chapter in Data Definition under Privileges which explains row security and provides examples of its usage, change \d to always list policies (even if row security is disabled, but note that it is disabled, or enabled with no policies), rework check_role_for_policy (it really didn't need the entire policy, but it did need to be using has_privs_of_role()), and change the field in pg_class to relrowsecurity from relhasrowsecurity, based on Heikki's suggestion.

  Also from Heikki, only issue SET ROW_SECURITY in pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and document --enable-row-security options for pg_dump and pg_restore.

  Lastly, fix a number of minor whitespace and typo issues from Heikki, Dimitri, add a missing #include, per Peter E, fix a few minor variable-assigned-but-not-used and resource leak issues from Coverity and add tab completion for role attribute bypassrls as well.
* Fix bogus variable-mangling in security_barrier_replace_vars().  (Tom Lane, 2014-09-24)

  This function created new Vars with varno different from varnoold, which is a condition that should never prevail before setrefs.c does the final variable-renumbering pass. The created Vars could not be seen as equal() to normal Vars, which among other things broke equivalence-class processing for them. The consequences of this were indeed visible in the regression tests, in the form of failure to propagate constants as one would expect.

  I stumbled across it while poking at bug #11457 --- after intentionally disabling join equivalence processing, the security-barrier regression tests started falling over with fun errors like "could not find pathkey item to sort", because of failure to match the corrupted Vars to normal ones.
* Fix incorrect search for "x?" style matches in creviterdissect().  (Tom Lane, 2014-09-23)

  When the number of allowed iterations is limited (either a "?" quantifier or a bound expression), the last sub-match has to reach to the end of the target string. The previous coding here first tried the shortest possible match (one character, usually) and then gave up and back-tracked if that didn't work, typically leading to failure to match overall, as shown in bug #11478 from Christoph Berg.

  The minimum change to fix that would be to not decrement k before "goto backtrack"; but that would be a pretty stupid solution, because we'd laboriously try each possible sub-match length before finally discovering that only ending at the end can work. Instead, force the sub-match endpoint limit up to the end for even the first shortest() call if we cannot have any more sub-matches after this one.

  Bug introduced in my rewrite that added the iterdissect logic, commit 173e29aa5deefd9e71c183583ba37805c8102a72. The shortest-first search code was too closely modeled on the longest-first code, which hasn't got this issue since it tries a match reaching to the end to start with anyway.

  Back-patch to all affected branches.
* Log ALTER SYSTEM statements as DDL.  (Stephen Frost, 2014-09-22)

  Per discussion in bug #11350, log ALTER SYSTEM commands at the log_statement=ddl level, rather than at the log_statement=all level.

  Pointed out by Tomonari Katsumata.

  Back-patch to 9.4 where ALTER SYSTEM was introduced.
* Process withCheckOption exprs in setrefs.c.  (Stephen Frost, 2014-09-22)

  While withCheckOption exprs had been handled in many cases by happenstance, they need to be handled during set_plan_references and more specifically down in set_plan_refs for ModifyTable plan nodes. This is to ensure that the opfuncids are set for operators referenced in the withCheckOption exprs.

  Identified as an issue by Thom Brown

  Patch by Dean Rasheed

  Back-patch to 9.4, where withCheckOption was introduced.
* Remove most volatile qualifiers from xlog.c.  (Andres Freund, 2014-09-22)

  For the reason outlined in df4077cda2e also remove volatile qualifiers from xlog.c. Some of these uses of volatile have been added after noticing problems back when spinlocks didn't imply compiler barriers. So they are a good test - in fact removing the volatiles breaks when done without the barriers in spinlocks present.

  Several uses of volatile remain where they are explicitly used to access shared memory without locks. These locations are ok with slightly out of date data, but removing the volatile might lead to the variables never being reread from memory. These uses could also be replaced by barriers, but that's a separate change of doubtful value.
* Remove volatile qualifiers from lwlock.c.  (Robert Haas, 2014-09-22)

  Now that spinlocks (hopefully!) act as compiler barriers, as of commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, this should be safe. This serves as a demonstration of the new coding style, and may be optimized better on some machines as well.
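  A hedged before/after sketch of the coding-style change (structure is hypothetical):

      #include "postgres.h"
      #include "storage/spin.h"

      typedef struct
      {
          slock_t mutex;
          int     counter;
      } shared_state;

      /* Old style: a volatile-qualified pointer stopped the compiler from
       * caching 'counter' across the lock calls. */
      static int
      read_old(shared_state *state)
      {
          volatile shared_state *vstate = state;
          int val;

          SpinLockAcquire(&vstate->mutex);
          val = vstate->counter;
          SpinLockRelease(&vstate->mutex);
          return val;
      }

      /* New style: SpinLockAcquire/Release themselves act as compiler
       * barriers, so the qualifier (and the extra pointer) can go. */
      static int
      read_new(shared_state *state)
      {
          int val;

          SpinLockAcquire(&state->mutex);
          val = state->counter;
          SpinLockRelease(&state->mutex);
          return val;
      }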
* Fix compiler warning.  (Robert Haas, 2014-09-22)

  It is meaningless to declare a pass-by-value return type const.
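  For illustration, the kind of declaration at issue (function name hypothetical):

      /* The 'const' qualifies an rvalue copy, so it has no effect and
       * some compilers warn about it: */
      const int get_value_warned(void);

      /* Equivalent, warning-free declaration: */
      int get_value(void);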
* Fix mishandling of CreateEventTrigStmt's eventname field.  (Robert Haas, 2014-09-22)

  It's a string, not a scalar.

  Petr Jelinek
* Remove postgres --help blurb about the removed -A option.  (Andres Freund, 2014-09-22)

  I missed this in 3bdcf6a5a755503.

  Noticed by Merlin Moncure

  Discussion: CAHyXU0yC7uPeeVzQROwtnrOP9dxTEUPYjB0og4qUnbipMEV57w@mail.gmail.com
* Improve code around the recently added rm_identify rmgr callback.  (Andres Freund, 2014-09-22)

  There are four weaknesses in 728f152e07f998d2cb4fe5f24ec8da2c3bda98f2:

  * append_init() in heapdesc.c was ugly and required that rm_identify return values are only valid till the next call. Instead just add a couple more switch() cases for the INIT_PAGE cases. Now the returned value will always be valid.

  * a couple rm_identify() callbacks missed masking xl_info with ~XLR_INFO_MASK.

  * pg_xlogdump didn't map a NULL rm_identify to UNKNOWN or a similar string.

  * append_init() was called when id=NULL - which should never actually happen. But it's better to be careful.
* Add a fast pre-check for equality of equal-length strings.  (Robert Haas, 2014-09-19)

  Testing reveals that doing a memcmp() before the strcoll() costs practically nothing, at least on the systems we tested, and it speeds up sorts containing many equal strings significantly.

  Peter Geoghegan. Review by myself and Heikki Linnakangas. Comments rewritten by me.
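  A hedged sketch of the pre-check (the real code is in varlena.c's varstr_cmp and handles non-terminated inputs and collations):

      #include <stddef.h>
      #include <string.h>

      /* Byte-wise equal strings are equal under any collation, so a cheap
       * memcmp() can skip the expensive strcoll(). */
      static int
      text_cmp_sketch(const char *a, size_t alen, const char *b, size_t blen)
      {
          /* fast path: identical bytes => equal, no locale call needed */
          if (alen == blen && memcmp(a, b, alen) == 0)
              return 0;

          /* slow path: locale-aware comparison (assumes NUL-terminated
           * inputs; the backend makes such copies before calling strcoll) */
          return strcoll(a, b);
      }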
* Row-Level Security Policies (RLS).  (Stephen Frost, 2014-09-19)

  Building on the updatable security-barrier views work, add the ability to define policies on tables to limit the set of rows which are returned from a query and which are allowed to be added to a table. Expressions defined by the policy for filtering are added to the security barrier quals of the query, while expressions defined to check records being added to a table are added to the with-check options of the query.

  New top-level commands are CREATE/ALTER/DROP POLICY and are controlled by the table owner. Row Security is able to be enabled and disabled by the owner on a per-table basis using ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

  Per discussion, ROW SECURITY is disabled on tables by default and must be enabled for policies on the table to be used. If no policies exist on a table with ROW SECURITY enabled, a default-deny policy is used and no records will be visible.

  By default, row security is applied at all times except for the table owner and the superuser. A new GUC, row_security, is added which can be set to ON, OFF, or FORCE. When set to FORCE, row security will be applied even for the table owner and superusers. When set to OFF, row security will be disabled when allowed and an error will be thrown if the user does not have rights to bypass row security.

  Per discussion, pg_dump sets row_security = OFF by default to ensure that exports and backups will have all data in the table or will error if there are insufficient privileges to bypass row security. A new option has been added to pg_dump, --enable-row-security, to ask pg_dump to export with row security enabled.

  A new role capability, BYPASSRLS, which can only be set by the superuser, is added to allow other users to be able to bypass row security using row_security = OFF.

  Many thanks to the various individuals who have helped with the design, particularly Robert Haas for his feedback.

  Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean Rasheed, with additional changes and rework by me. Reviewers have included all of the above, Greg Smith, Jeff McCormick, and Robert Haas.
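  A hedged sketch of the command sequence the feature introduces, driven through libpq (table, column, and policy names hypothetical; error handling trimmed):

      #include <libpq-fe.h>

      /* Enable row security on a table and add a policy limiting each
       * user to rows they own. */
      static void
      setup_row_security(PGconn *conn)
      {
          const char *cmds[] = {
              "ALTER TABLE accounts ENABLE ROW SECURITY",
              /* default-deny until a policy exists; this one permits all
               * commands on rows whose owner matches the current user */
              "CREATE POLICY accounts_owner ON accounts FOR ALL "
              "USING (owner = current_user) "
              "WITH CHECK (owner = current_user)",
          };
          int i;

          for (i = 0; i < 2; i++)
              PQclear(PQexec(conn, cmds[i]));
      }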
* Add rmgr callback to name xlog record types for display purposes.  (Andres Freund, 2014-09-19)

  This is primarily useful for the upcoming pg_xlogdump --stats feature, but also allows to remove some duplicated code in the rmgr_desc routines.

  Due to the separation and harmonization, the output of displayed records changes somewhat. But since this isn't end-user oriented content that's ok.

  It's potentially desirable to further change pg_xlogdump's display of records. It previously wasn't possible to show the record type separately from the description forcing it to be in the last column. But that's better done in a separate commit.

  Author: Abhijit Menon-Sen, slightly editorialized by me

  Reviewed-By: Álvaro Herrera, Andres Freund, and Heikki Linnakangas

  Discussion: 20140604104716.GA3989@toroid.org
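  A self-contained sketch of the callback shape (constants mimic the heap rmgr's but are assumptions here, not taken from any particular release):

      #include <stdint.h>
      #include <stddef.h>

      #define XLR_INFO_MASK        0x0F
      #define XLOG_HEAP_INSERT     0x00
      #define XLOG_HEAP_INIT_PAGE  0x80

      /* The new callback maps a record's xl_info bits to a type name;
       * returning NULL lets pg_xlogdump print UNKNOWN instead. */
      typedef const char *(*rm_identify_fn) (uint8_t info);

      static const char *
      heap_identify_sketch(uint8_t info)
      {
          switch (info & ~XLR_INFO_MASK)
          {
              case XLOG_HEAP_INSERT:
                  return "INSERT";
              case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
                  return "INSERT+INIT";   /* INIT_PAGE gets its own case,
                                           * so the value is always valid */
              default:
                  return NULL;
          }
      }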
* Fix pointer type in size passed to memset.  (Heikki Linnakangas, 2014-09-14)

  Pointers are all the same size, so it makes no practical difference, but let's be tidy.

  Found by Coverity, noted off-list by Tom Lane.