path: root/src
...
* Fix error handling in temp-file deletion with log_temp_files active. (Tom Lane, 2010-11-08)
The original coding in FileClose() reset the file-is-temp flag before unlinking the file, so that if control came back through due to an error, it wouldn't try to unlink the file twice. This was correct when written, but when the log_temp_files feature was added, the logging action was put in between those two steps. An error occurring during the logging action --- such as a query cancel --- would result in the unlink not getting done at all, as in a recent report from Michael Glaesemann. To fix, make sure that we do both the stat and the unlink before doing anything that could conceivably CHECK_FOR_INTERRUPTS.

There is a judgment call here, which is which log message to emit first: if you can see only one, which should it be? I chose to log unlink failure at the risk of losing the log_temp_files log message --- after all, if the unlink does fail, the temp file is still there for you to see.

Back-patch to all versions that have log_temp_files. The code was OK before that.
* Fix permanent memory leak in autovacuum launcher (Alvaro Herrera, 2010-11-08)
get_database_list was uselessly allocating its output data, along with some data created along the way, in a permanent memory context. This didn't matter when autovacuum was a single, short-lived process, but now that the launcher is permanent, it shows up as a permanent leak.

To fix, make get_database_list allocate its output data in the caller's context, which is in charge of freeing it when appropriate; and the memory leaked by heap_beginscan et al is allocated in a throwaway transaction context.
* Use appendrel planning logic for top-level UNION ALL structures. (Tom Lane, 2010-11-08)
Formerly, we could convert a UNION ALL structure inside a subquery-in-FROM into an appendrel, as a side effect of pulling up the subquery into its parent; but top-level UNION ALL always caused use of plan_set_operations(). That didn't matter too much because you got an Append-based plan either way. However, now that the appendrel code can do things with MergeAppend, it's worthwhile to hack up the top-level case so it also uses appendrels.

This is a bit of a stopgap; but going much further than this will require a major rewrite of the planner's set-operations support, which I'm not prepared to undertake now. For the moment let's grab the low-hanging fruit.
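For illustration (table and column names hypothetical), this is the kind of top-level UNION ALL that now goes through appendrel planning; with a per-branch index on the sort column, the ORDER BY can potentially be satisfied by a MergeAppend of index scans rather than a sort over a plain Append:

    -- assuming each table has an index on (created_at)
    SELECT id, created_at FROM events_2009
    UNION ALL
    SELECT id, created_at FROM events_2010
    ORDER BY created_at;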
* Prevent invoking I/O conversion casts via functional/attribute notation. (Tom Lane, 2010-11-07)
PG 8.4 added a built-in feature for casting pretty much any data type to string types (text, varchar, etc). We allowed this to work in any of the historically-allowed syntaxes: CAST(x AS text), x::text, text(x), or x.text. However, multiple complaints have shown that it's too easy to invoke such casts unintentionally in the latter two styles, particularly field selection.

To cure the problem with the narrowest possible change of behavior, disallow use of I/O conversion casts from composite types to string types via functional/attribute syntax. The new functionality is still available via cast syntax.

In passing, document the equivalence of functional and attribute syntax in a more visible place.
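A sketch of the four syntaxes named above, using a hypothetical table t as the composite value; after this change the functional and attribute forms no longer perform the composite-to-string I/O conversion cast:

    CREATE TABLE t (a int, b text);
    SELECT CAST(t AS text) FROM t;  -- cast syntax: still allowed
    SELECT t::text FROM t;          -- cast syntax: still allowed
    SELECT text(t) FROM t;          -- functional syntax: now disallowed for this case
    SELECT t.text FROM t;           -- attribute syntax: now disallowed for this case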
* Implement an "S" option for psql's \dn command. (Tom Lane, 2010-11-06)
\dn without "S" now hides all pg_XXX schemas as well as information_schema. Thus, in a bare database you'll only see "public". ("public" is considered a user schema, not a system schema, mainly because it's droppable.)

Per discussion back in late September.
* Add support for detecting register-stack overrun on IA64. (Tom Lane, 2010-11-06)
Per recent investigation, the register stack can grow faster than the regular stack depending on compiler and choice of options. To avoid crashes we must check both stacks in check_stack_depth().

Since this is poorly-tested code, committing only to HEAD for the moment ... but we might want to consider back-patching later.
* Make get_stack_depth_rlimit() handle RLIM_INFINITY more sanely. (Tom Lane, 2010-11-06)
Rather than considering this result as meaning "unknown", report LONG_MAX. This won't change what superusers can set max_stack_depth to, but it will cause InitializeGUCOptions() to set the built-in default to 2MB not 100kB. The latter seems like a fairly unreasonable interpretation of "infinity". Per my investigation of odd buildfarm results as well as an old complaint from Heikki.

Since this should persuade all the buildfarm animals to use a reasonable stack depth setting during "make check", revert previous patch that dumbed down a recursive regression test to only 5 levels.
* Include the current value of max_stack_depth in stack depth complaints. (Tom Lane, 2010-11-04)
I'm mainly interested in finding out what it is on buildfarm machines, but including the active value in the message seems like good practice in any case. Add the info to the HINT, not the ERROR string, so as not to change the regression tests' expected output.
* Use appendStringInfoString() where appropriate in elog.c. (Tom Lane, 2010-11-04)
The nominally equivalent call appendStringInfo(buf, "%s", str) can be significantly slower when str is large. In particular, the former usage in EVALUATE_MESSAGE led to O(N^2) behavior when collecting a large number of context lines, as I found out while testing recursive functions. The other changes are just neatnik-ism and seem unlikely to save anything meaningful, but a cycle shaved is a cycle earned.
* Reimplement planner's handling of MIN/MAX aggregate optimization. (Tom Lane, 2010-11-04)
Per my recent proposal, get rid of all the direct inspection of indexes and manual generation of paths in planagg.c. Instead, set up EquivalenceClasses for the aggregate argument expressions, and let the regular path generation logic deal with creating paths that can satisfy those sort orders. This makes planagg.c a bit more visible to the rest of the planner than it was originally, but the approach is basically a lot cleaner than before.

A major advantage of doing it this way is that we get MIN/MAX optimization on inheritance trees (using MergeAppend of indexscans) practically for free, whereas in the old way we'd have had to add a whole lot more duplicative logic.

One small disadvantage of this approach is that MIN/MAX aggregates can no longer exploit partial indexes having an "x IS NOT NULL" predicate, unless that restriction or something that implies it is specified in the query. The previous implementation was able to use the added "x IS NOT NULL" condition as an extra predicate proof condition, but in this version we rely entirely on indexes that are considered usable by the main planning process. That seems a fair tradeoff for the simplicity and functionality gained.
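Two hedged examples of the behavior changes described above (table names hypothetical):

    -- Inheritance tree: children of parent_tbl each have an index on (x).
    -- The MIN/MAX optimization can now apply here via MergeAppend of index scans.
    SELECT min(x) FROM parent_tbl;

    -- Partial index: CREATE INDEX ON tbl (x) WHERE x IS NOT NULL;
    SELECT max(x) FROM tbl;                       -- partial index no longer considered
    SELECT max(x) FROM tbl WHERE x IS NOT NULL;   -- predicate stated, so the index is usable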
* Reduce recursion depth in recently-added regression test. (Tom Lane, 2010-11-03)
Some buildfarm members fail the test with the original depth of 10 levels, apparently because they are running at the minimum max_stack_depth setting of 100kB and using ~10k per recursion level. While it might be interesting to try to figure out why they're eating so much stack, it isn't likely that any fix for that would be back-patchable. So just change the test to recurse only 5 levels. The extra levels don't prove anything correctness-wise anyway.
* Use only one hash entry for all instances of a pltcl trigger function. (Tom Lane, 2010-11-03)
Like plperl and unlike plpgsql, there isn't any cached state that could depend on exactly which relation the trigger is being fired for. So we can use just one hash entry for all relations, which might save a little something.

Alex Hunsaker
* Fix adjust_semi_join to be more cautious about clauseless joins. (Tom Lane, 2010-11-02)
It was reporting that these were fully indexed (hence cheap), when of course they're the exact opposite of that. I'm not certain if the case would arise in practice, since a clauseless semijoin is hard to produce in SQL, but if it did happen we'd make some dumb decisions.
* Ensure an index that uses a whole-row Var still depends on its table. (Tom Lane, 2010-11-02)
We failed to record any dependency on the underlying table for an index declared like "create index i on t (foo(t.*))". This would create trouble if the table were dropped without previously dropping the index. To fix, simplify some overly-cute code in index_create(), accepting the possibility that sometimes the whole-table dependency will be redundant. Also document this hazard in dependency.c. Per report from Kevin Grittner.

In passing, prevent a core dump in pg_get_indexdef() if the index's table can't be found. I came across this while experimenting with Kevin's example. Not sure it's a real issue when the catalogs aren't corrupt, but might as well be cautious.

Back-patch to all supported versions.
* Some cleanup in ecpg code: (Michael Meskes, 2010-11-02)
Use bool as type for booleans instead of int. Do not implicitly cast size_t to int. Make the compiler stop complaining about unused variables by adding an empty statement.
* Bootstrap WAL to begin at segment logid=0 logseg=1 (000000010000000000000001) rather than 0/0, so that we can safely use 0/0 as an invalid value. (Heikki Linnakangas, 2010-11-02)
This is a more future-proof fix for the corner-case bug in streaming replication that was fixed yesterday. We had a similar corner-case bug with log/seg 0/0 back in February as well. Avoiding 0/0 as a valid value should prevent bugs like that in the future. Per Tom Lane's idea.

Back-patch to 9.0. Since this only affects bootstrapping, it makes no difference to existing installations. We don't need to worry about the bug in existing installations, because if you've managed to get past the initial base backup already, you won't hit the bug in the future either.
* Avoid using a local FunctionCallInfoData struct in ExecMakeFunctionResult and related routines. (Tom Lane, 2010-11-01)
We already had a redundant FunctionCallInfoData struct in FuncExprState, but were using that copy only in set-returning-function cases, to avoid keeping function evaluation state in the expression tree for the benefit of plpgsql's "simple expression" logic. But of course that didn't work anyway. Given the recent fixes in plpgsql there is no need to have two separate behaviors here.

Getting rid of the local FunctionCallInfoData structs should make things a little faster (because we don't need to do InitFunctionCallInfoData each time), and it also makes for a noticeable reduction in stack space consumption during recursive calls.
* Fix corner-case bug in tracking of latest removed WAL segment during streaming replication. (Heikki Linnakangas, 2010-11-01)
We used log/seg 0/0 to indicate that no WAL segments have been removed since startup, but 0/0 is a valid value for the very first WAL segment after initdb. To make that unambiguous, store (latest removed WAL segment + 1) in the global variable.

Per report from Matt Chesler, also reproduced by Greg Smith.
* Revert removal of trigger flag from plperl function hash key. (tag: REL9_1_ALPHA2) (Tom Lane, 2010-10-31)
As noted by Jan Urbanski, this flag is in fact needed to ensure that the function's input/result conversion functions are set up as expected.

Add a regression test to discourage anyone from making the same mistake in future.
* Provide hashing support for arrays. (Tom Lane, 2010-10-30)
The core of this patch is hash_array() and associated typcache infrastructure, which works just about exactly like the existing support for array comparison.

In addition I did some work to ensure that the planner won't think that an array type is hashable unless its element type is hashable, and similarly for sorting. This includes adding a datatype parameter to op_hashjoinable and op_mergejoinable, and adding an explicit "hashable" flag to SortGroupClause. The lack of a cross-check on the element type was a pre-existing bug in mergejoin support --- but it didn't matter so much before, because if you couldn't sort the element type there wasn't any good alternative to failing anyhow. Now that we have the alternative of hashing the array type, there are cases where we can avoid a failure by being picky at the planner stage, so it's time to be picky.

The issue of exactly how to combine the per-element hash values to produce an array hash is still open for discussion, but the rest of this is pretty solid, so I'll commit it as-is.
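As a rough illustration of what array hashing enables at the SQL level (names hypothetical), grouping or hash-joining on an array column can now use hash-based plans when the element type is itself hashable:

    CREATE TABLE tag_sets (id serial, tags text[]);
    -- can now be planned with a HashAggregate rather than requiring a sort:
    SELECT tags, count(*) FROM tag_sets GROUP BY tags;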
* Fix comparisons of pointers with zero to compare with NULL instead. (Tom Lane, 2010-10-29)
Per C standard, these are semantically the same thing; but saying NULL when you mean NULL is good for readability.

Marti Raudsepp, per results of INRIA's Coccinelle.
* Oops, missed one fix for EquivalenceClass rearrangement. (Tom Lane, 2010-10-29)
Now that we're expecting a mergeclause's left_ec/right_ec to persist from the initial assignments, we can't just blithely zero these out when transforming such a clause in adjust_appendrel_attrs. But really it should be okay to keep the parent's values, since a child table's derived Var ought to be equivalent to the parent Var for all EquivalenceClass purposes. (Indeed, I'm wondering whether we couldn't find a way to dispense with add_child_rel_equivalences altogether. But this is wrong in any case.)
* Avoid creation of useless EquivalenceClasses during planning. (Tom Lane, 2010-10-29)
Zoltan Boszormenyi exhibited a test case in which planning time was dominated by construction of EquivalenceClasses and PathKeys that had no actual relevance to the query (and in fact got discarded immediately). This happened because we generated PathKeys describing the sort ordering of every index on every table in the query, and only after that checked to see if the sort ordering was relevant. The EC/PK construction code is O(N^2) in the number of ECs, which is all right for the intended number of such objects, but it gets out of hand if there are ECs for lots of irrelevant indexes.

To fix, twiddle the handling of mergeclauses a little bit to ensure that every interesting EC is created before we begin path generation. (This doesn't cost anything --- in fact I think it's a bit cheaper than before --- since we always eventually created those ECs anyway.) Then, if an index column can't be found in any pre-existing EC, we know that that sort ordering is irrelevant for the query. Instead of creating a useless EC, we can just not build a pathkey for the index column in the first place. The index will still be considered if it's useful for non-order-related reasons, but we will think of its output as unsorted.
* Give a more specific error message if you try to COMMIT, ROLLBACK or COPY FROM STDIN in PL/pgSQL. (Heikki Linnakangas, 2010-10-29)
We already did this for dynamic EXECUTE statements, i.e. "EXECUTE 'COMMIT'", but not otherwise.
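A minimal example of the case now reported more specifically (function name hypothetical); a static COMMIT inside a PL/pgSQL function fails, and the error message now names the problem directly rather than a less specific failure:

    CREATE FUNCTION try_commit() RETURNS void AS $$
    BEGIN
        COMMIT;   -- not allowed inside a PL/pgSQL function; now gets a specific error
    END;
    $$ LANGUAGE plpgsql;

    SELECT try_commit();  -- raises the more specific error message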
* Allow generic record arguments to plperl functions (Andrew Dunstan, 2010-10-28)
* Add tab completion for psql \dg and \z (Peter Eisentraut, 2010-10-28)
Josh Kupershmidt
* Make \? output of \dg and \du the same (Peter Eisentraut, 2010-10-28)
The previous wording might have suggested that \du only showed login roles and \dg only group roles, but that is no longer the case.

proposed by Josh Kupershmidt
* Save a few cycles in plpgsql simple-expression initialization. (Tom Lane, 2010-10-28)
Instead of using ExecPrepareExpr, call ExecInitExpr. The net change here is that we don't apply expression_planner() to the expression tree. There is no need to do so, because that tree is extracted from a fully planned plancache entry, so all the needed work is already done. This reduces the setup costs by about a factor of 2 according to some simple tests.

Oversight noted while fooling around with the simple-expression code for previous fix.
* Fix plpgsql's handling of "simple" expression evaluation. (Tom Lane, 2010-10-28)
In general, expression execution state trees aren't re-entrantly usable, since functions can store private state information in them. For efficiency reasons, plpgsql tries to cache and reuse state trees for "simple" expressions. It can get away with that most of the time, but it can fail if the state tree is dirty from a previous failed execution (as in an example from Alvaro) or is being used recursively (as noted by me).

Fix by tracking whether a state tree is in use, and falling back to the "non-simple" code path if so. This results in a pretty considerable speed hit when the non-simple path is taken, but the available alternatives seem even more unpleasant because they add overhead in the simple path. Per idea from Heikki.

Back-patch to all supported branches.
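A sketch of the recursive case mentioned above (function name hypothetical): the RETURN expression qualifies as a plpgsql "simple" expression, and each recursion level re-enters the same cached expression state, which is what the new in-use tracking guards against:

    CREATE FUNCTION countdown(n int) RETURNS int AS $$
    BEGIN
        IF n <= 0 THEN
            RETURN 0;
        END IF;
        -- a simple expression that recursively re-enters this same function
        RETURN countdown(n - 1) + 1;
    END;
    $$ LANGUAGE plpgsql;

    SELECT countdown(10);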
* Previous patch had no detectable virtue other than being a one-liner. (Tom Lane, 2010-10-27)
Try to make the code look self-consistent again, so it doesn't confuse future developers.
* Fix long-standing segfault when accept() or one of the calls made right after accepting a connection fails, and the server is compiled with GSSAPI support. (Heikki Linnakangas, 2010-10-27)
Report and patch by Alexander V. Chernikov, bug #5731.
* Fix up some oversights in psql's Unicode-escape support. (Tom Lane, 2010-10-26)
Original patch failed to include new exclusive states in a switch that needed to include them; and also was guilty of very fuzzy thinking about how to handle error cases. Per bug #5729 from Alan Choi.
* Add a client authentication hook. (Robert Haas, 2010-10-26)
KaiGai Kohei, with minor cleanup of the comments by me.
* Minor fixups for psql's process_file() function. (Robert Haas, 2010-10-26)
- Avoid closing stdin, since we didn't open it. Previously multiple inclusions of stdin would be terminated with a single quit, now a separate quit is needed for each invocation. Previous behavior also accessed stdin after it was fclose()d, which is undefined behavior per ANSI C.
- Properly restore pset.inputfile, since the caller expects to be able to free that memory.

Marti Raudsepp
* Fix dumb typo in SECURITY LABEL error message. (Robert Haas, 2010-10-26)
Report by Peter Eisentraut.
* Before removing backup_label and irrevocably changing pg_control file, check that the WAL file containing the checkpoint redo-location can be found. (Heikki Linnakangas, 2010-10-26)
This avoids making the cluster irrecoverable if the redo location is in an earlier WAL file than the checkpoint record.

Report, analysis and patch by Jeff Davis, with small changes by me.
* Add missing newlines at end of files (Peter Eisentraut, 2010-10-26)
* Fix typos "are are". (Itagaki Takahiro, 2010-10-26)
* Refactor typenameTypeId() (Peter Eisentraut, 2010-10-25)
Split the old typenameTypeId() into two functions: a new typenameTypeId() that returns only a type OID, and typenameTypeIdAndMod() that returns type OID and typmod. This better isolates the call sites that actually care about the typmod.
* Fix overly-enthusiastic Assert in printing of Param reference expressions. (Tom Lane, 2010-10-25)
A NestLoopParam's value can only be a Var or Aggref, but this isn't the case in general for SubPlan parameters, so print_parameter_expr had better be prepared to cope. Brain fade in my recent patch to print the referenced expression instead of just printing $N for PARAM_EXEC Params. Per report from Pavel Stehule.
* Fix inline_set_returning_function() to preserve the invalItems list properly. (Tom Lane, 2010-10-25)
This avoids a possible crash when inlining a SRF whose argument list contains a reference to an inline-able user function. The crash is quite reproducible with CLOBBER_FREED_MEMORY enabled, but would be less certain in a production build. Problem introduced in 9.0 by the named-arguments patch, which requires invoking eval_const_expressions() before we can try to inline a SRF.

Per report from Brendan Jurd.
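One plausible shape of the pattern described above (function names hypothetical, parameters written as $1 in the pre-9.2 style): an inline-able set-returning SQL function whose argument contains a call to another inline-able SQL function:

    CREATE FUNCTION add_one(int) RETURNS int AS
        $$ SELECT $1 + 1 $$ LANGUAGE sql;
    CREATE FUNCTION upto(int) RETURNS SETOF int AS
        $$ SELECT generate_series(1, $1) $$ LANGUAGE sql;

    SELECT * FROM upto(add_one(3));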
* Work around rounding misbehavior exposed by buildfarm. (Tom Lane, 2010-10-25)
* Remove unnecessary use of trigger flag to hash plperl functions (Andrew Dunstan, 2010-10-24)
* Allow new values to be added to an existing enum type. (Tom Lane, 2010-10-24)
After much expenditure of effort, we've got this to the point where the performance penalty is pretty minimal in typical cases.

Andrew Dunstan, reviewed by Brendan Jurd, Dean Rasheed, and Tom Lane
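Illustration with hypothetical names; in the syntax released in 9.1 this is spelled ALTER TYPE ... ADD VALUE, optionally positioned with BEFORE/AFTER:

    CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
    -- add a label without dropping and recreating the type
    ALTER TYPE mood ADD VALUE 'ecstatic' AFTER 'happy';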
* Support suffix matching of host names in pg_hba.conf (Peter Eisentraut, 2010-10-24)
A name starting with a dot can be used to match a suffix of the actual host name (e.g., .example.com matches foo.example.com).
* Add semicolon, missed in previous patch. And update the keyword list in the docs to reflect that OFF is now unreserved. (Heikki Linnakangas, 2010-10-22)
Spotted by Tom Lane.
* Make OFF keyword unreserved. (Heikki Linnakangas, 2010-10-22)
It's not hard to imagine wanting to use 'off' as a variable or column name, and it's not reserved in recent versions of the SQL spec either. This became particularly annoying in 9.0; before that, PL/pgSQL replaced variable names in queries with parameter markers, so it was possible to use OFF and many other backend parser keywords as variable names. Because of that, backpatch to 9.0.
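For example (identifiers hypothetical), with OFF unreserved the following statements parse without quoting the column name, whereas a reserved keyword would require quoting:

    CREATE TABLE switches (id int, off boolean);
    SELECT id FROM switches WHERE off;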
* Improve handling of domains over arrays. (Tom Lane, 2010-10-21)
This patch eliminates various bizarre behaviors caused by sloppy thinking about the difference between a domain type and its underlying array type. In particular, the operation of updating one element of such an array has to be considered as yielding a value of the underlying array type, *not* a value of the domain, because there's no assurance that the domain's CHECK constraints are still satisfied. If we're intending to store the result back into a domain column, we have to re-cast to the domain type so that constraints are re-checked.

For similar reasons, such a domain can't be blindly matched to an ANYARRAY polymorphic parameter, because the polymorphic function is likely to apply array-ish operations that could invalidate the domain constraints. For the moment, we just forbid such matching. We might later wish to insert an automatic downcast to the underlying array type, but such a change should also change matching of domains to ANYELEMENT for consistency.

To ensure that all such logic is rechecked, this patch removes the original hack of setting a domain's pg_type.typelem field to match its base type; the typelem will always be zero instead. In those places where it's really okay to look through the domain type with no other logic changes, use the newly added get_base_element_type function in place of get_element_type.

catversion bumped due to change in pg_type contents. Per bug #5717 from Richard Huxton and subsequent discussion.
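A short sketch of the element-update behavior described above (domain and table names hypothetical): the per-element assignment yields a plain int[] internally, and storing it back into the domain column re-checks the CHECK constraint:

    CREATE DOMAIN posint_arr AS int[] CHECK (0 < ALL (VALUE));
    CREATE TABLE d (a posint_arr);
    INSERT INTO d VALUES (ARRAY[1, 2, 3]);
    -- the updated array is re-cast to the domain, so the constraint rejects this:
    UPDATE d SET a[2] = -5;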
* Remove obsolete comment, per Josh Kupershmidt. (Tom Lane, 2010-10-20)
* Don't try to fetch database name when SetTransactionIdLimit() is executed outside a transaction. (Tom Lane, 2010-10-20)
This repairs brain fade in my patch of 2009-08-30: the reason we had been storing oldest-database name, not OID, in ShmemVariableCache was of course to avoid having to do a catalog lookup at times when it might be unsafe.

This error explains why Aleksandr Dushein is having trouble getting out of an XID wraparound state in bug #5718, though not how he got into that state in the first place. I suspect pg_upgrade is at fault there.