path: root/src
...
* Add inheritable ACE when creating a restricted token for execution on Win32.  (Magnus Hagander, 2009-11-14)
  Also refactor the code around it to be more clear.

  Jesse Morris
* Clean up a couple of bizarre code formatting choices in recent CREATE LIKE patch.  (Tom Lane, 2009-11-13)
* Avoid assuming that enum CreateStmtLikeOption is unsigned. Zdenek Kotala  (Tom Lane, 2009-11-13)
* Add control knobs for plpgsql's variable resolution behavior, and make the default be "throw error on conflict", as per discussions.  (Tom Lane, 2009-11-13)
  The GUC variable is plpgsql.variable_conflict, with values "error", "use_variable", "use_column". The behavior can also be specified per-function by inserting one of
      #variable_conflict error
      #variable_conflict use_variable
      #variable_conflict use_column
  at the start of the function body. The 8.5 release notes will need to mention using "use_variable" to retain backward-compatible behavior, although we should encourage people to migrate to the much less mistake-prone "error" setting. Update the plpgsql documentation to match this and other recent changes.
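  A minimal sketch of the per-function form described above; the function and the orders table are hypothetical, and this assumes the 8.5-era behavior introduced by this commit:

      CREATE FUNCTION get_order_total(id int) RETURNS numeric AS $$
          #variable_conflict use_column   -- ambiguous names resolve to the column
      DECLARE
          total numeric;
      BEGIN
          -- "id" in the WHERE clause could mean either the parameter or
          -- orders.id; with use_column it means the column, so the parameter
          -- is referenced by qualifying it with the function name.
          SELECT sum(amount) INTO total
            FROM orders
           WHERE id = get_order_total.id;
          RETURN total;
      END;
      $$ LANGUAGE plpgsql;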
* A better fix for the "ARRAY[...]::domain" problem. The previous patch worked, but the transformed ArrayExpr claimed to have a return type of "domain", even though the domain constraint was only checked by the enclosing CoerceToDomain node.  (Heikki Linnakangas, 2009-11-13)
  With this fix, the ArrayExpr is correctly labeled with the base type of the domain. Per gripe by Tom Lane.
* When you do "ARRAY[...]::domain", where domain is a domain over an array type, we need to check domain constraints.  (Heikki Linnakangas, 2009-11-13)
  We used to do it correctly, but 8.4 introduced a separate code path for the "ARRAY[]::arraytype" case to infer the type of an empty ARRAY construct from the cast target, and forgot to take domains into account. Per report from Florian G. Pflug.
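  A small illustration of the empty-array case described above; the domain name and its constraint are hypothetical:

      -- A domain over an array type whose constraint rejects empty arrays
      -- (array_length() returns NULL for an empty array).
      CREATE DOMAIN nonempty_ints AS int[]
          CHECK (array_length(VALUE, 1) IS NOT NULL);

      SELECT ARRAY[1, 2, 3]::nonempty_ints;  -- passes the domain constraint
      SELECT ARRAY[]::nonempty_ints;         -- must fail: the empty-array code
                                             -- path has to check the domain too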
* Fix multicolumn GIN's wrong results with fastupdate enabled.  (Teodor Sigaev, 2009-11-13)
  User-defined consistent functions assume the check array contains at least one true element, which was not true when scanning the pending list. Per report from Yury Don <yura@vpcit.ru>
* The recent patch to log changes in postgresql.conf settings dumped core if the initial value of a string variable was NULL, which is entirely possible.  (Tom Lane, 2009-11-12)
  Noted while experimenting with custom_variable_classes.
* Check for C/POSIX before assuming that nl_langinfo or win32_langinfo will work.  (Tom Lane, 2009-11-12)
  Per buildfarm results.
* Make initdb behave sanely when the selected locale has codeset "US-ASCII".  (Tom Lane, 2009-11-12)
  Per discussion, this should result in defaulting to SQL_ASCII encoding. The original coding could not support that because it conflated selection of SQL_ASCII encoding with not being able to determine the encoding. Adjust pg_get_encoding_from_locale()'s API to distinguish these cases, and fix callers appropriately. Only initdb actually changes behavior, since the other callers were perfectly content to consider these cases equivalent.

  Per bug #5178 from Boh Yap. Not going to bother back-patching, since no one has complained before and there's an easy workaround (namely, specify the encoding you want).
* Remove pg_parse_string_token() --- not needed anymore.  (Tom Lane, 2009-11-12)
* Remove plpgsql's separate lexer (finally!), in favor of using the core lexer directly.  (Tom Lane, 2009-11-12)
  This was a lot of trouble, but should be worth it in terms of not having to keep the plpgsql lexer in step with core anymore. In addition the handling of keywords is significantly better-structured, allowing us to de-reserve a number of words that plpgsql formerly treated as reserved.
* In psql \du, separate the role attributes by comma instead of newline, for an arguably more pleasant display.  (Peter Eisentraut, 2009-11-11)
* Change "name" nonterminal in cursor-related productions to cursor_name.Alvaro Herrera2009-11-11
| | | | | | | This is a preparatory patch for allowing a dynamic cursor name be used in the ECPG grammar. Author: Zoltan Boszormenyi
* Support optional FROM/IN in FETCH and MOVE.  (Alvaro Herrera, 2009-11-11)
  The main motivation for this is that it's required for Informix compatibility in ECPG. This patch makes the ECPG and core grammars a bit closer to one another for these productions.

  Author: Zoltan Boszormenyi
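  For reference, a hedged sketch of the forms this is intended to make interchangeable; the cursor and the orders table are hypothetical, and cursors must be used inside a transaction:

      BEGIN;
      DECLARE cur CURSOR FOR SELECT * FROM orders;
      FETCH NEXT FROM cur;   -- standard form
      FETCH NEXT IN cur;     -- also accepted
      FETCH NEXT cur;        -- FROM/IN may now be omitted
      MOVE FORWARD 2 cur;    -- likewise for MOVE
      COMMIT;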
* Do not build psql's flex module on its own, but instead include it in mainloop.c.  (Tom Lane, 2009-11-10)
  This ensures that postgres_fe.h is read before including any system headers, which is necessary to avoid problems on some platforms where we make nondefault selections of feature macros for stdio.h or other headers. We have had this policy for flex modules in the backend for many years, but for some reason it was not applied to psql. Per trouble report from Alexandra Roy and diagnosis by Albe Laurenz.
* Revert the temporary patch to work around Snow Leopard readdir() bug.  (Tom Lane, 2009-11-10)
  Apple has fixed that bug in 10.6.2, and we should encourage users to update to that version rather than trusting this cosmetic patch. As was recently noted by Stephen Tyler, this patch was only masking the problem in the context of DROP TABLESPACE, but the failure could occur in other places such as pg_xlog cleanup.
* interval_abs(): Add C comment about why there is no interval_abs(): it is unclear what value to return.  (Bruce Momjian, 2009-11-10)
  http://archives.postgresql.org/pgsql-general/2009-10/msg01031.php
  http://archives.postgresql.org/pgsql-general/2009-11/msg00041.php
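  A short worked example of the ambiguity: an interval can mix components of different signs, so the same interval can move a date either forward or backward, and its absolute value is therefore not well defined.

      SELECT (date '2009-01-01' + interval '1 month -30 days')::date;  -- 2009-01-02, moves forward
      SELECT (date '2009-02-01' + interval '1 month -30 days')::date;  -- 2009-01-30, moves backward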
* Fix longstanding problems in VACUUM caused by untimely interruptions.  (Alvaro Herrera, 2009-11-10)
  In VACUUM FULL, an interrupt after the initial transaction has been recorded as committed can cause postmaster to restart with the following error message:
      PANIC: cannot abort transaction NNNN, it was already committed
  This problem has been reported many times.

  In lazy VACUUM, an interrupt after the table has been truncated by lazy_truncate_heap causes other backends' relcache to still point to the removed pages; this can cause future INSERT and UPDATE queries to error out with the following error message:
      could not read block XX of relation 1663/NNN/MMMM: read only 0 of 8192 bytes
  The window to this race condition is extremely narrow, but it has been seen in the wild involving a cancelled autovacuum process.

  The solution for both problems is to inhibit interrupts in both operations until after the respective transactions have been committed. It's not a complete solution, because the transaction could theoretically be aborted by some other error, but at least it fixes the most common causes of both problems.
* More incremental refactoring in plpgsql: get rid of gram.y dependencies on yytext.  (Tom Lane, 2009-11-10)
  This is a necessary change if we're going to have a lexer interface layer that does lookahead, since yytext won't necessarily be in step with what the grammar thinks is the current token. yylval and yylloc should be the only side-variables that we need to manage when doing lookahead.
* Re-refactor the core scanner's API, in order to get out from under the problem of different parsers having different YYSTYPE unions that they want to use with it.  (Tom Lane, 2009-11-09)
  I defined a new union core_YYSTYPE that is just the (very short) list of semantic values returned by the core scanner. I had originally worried that this would require an extra interface layer, but actually we can have parser.c's base_yylex (formerly filtered_base_yylex) take care of that at no extra cost. Names associated with the core scanner are now "core_yy_foo", with "base_yy_foo" being used in the core Bison parser and the parser.c interface layer.

  This solves the last serious stumbling block to eliminating plpgsql's separate lexer. One restriction that will still be present is that plpgsql and the core will have to agree on the token numbers assigned to tokens that can be returned by the core lexer. Since Bison doesn't seem willing to accept external assignments of those numbers, we'll have to live with decreeing that core and plpgsql grammars declare these tokens first and in the same order.
* Fix WHERE CURRENT OF to work as designed within plpgsql.  (Tom Lane, 2009-11-09)
  The argument can be the name of a plpgsql cursor variable, which formerly was converted to $N before the core parser saw it, but that's no longer the case. Deal with plain name references to plpgsql variables, and add a regression test case that exposes the failure.
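  A minimal sketch of the usage in question; the tasks table and its columns are hypothetical:

      CREATE FUNCTION claim_next_task() RETURNS void AS $$
      DECLARE
          curs CURSOR FOR SELECT * FROM tasks WHERE status = 'pending' FOR UPDATE;
          rec  record;
      BEGIN
          OPEN curs;
          FETCH curs INTO rec;
          IF FOUND THEN
              -- "curs" is a plain reference to a plpgsql cursor variable,
              -- which the core parser now resolves directly.
              UPDATE tasks SET status = 'in progress' WHERE CURRENT OF curs;
          END IF;
          CLOSE curs;
      END;
      $$ LANGUAGE plpgsql;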
* Modernize plpgsql's handling of parse locations, making it look a lot more like the core parser's code.  (Tom Lane, 2009-11-09)
  In particular, track locations at the character rather than line level during parsing, allowing many more parse-time error conditions to be reported with precise error pointers rather than just "near line N".

  Also, exploit the fact that we no longer need to substitute $N for variable references by making extracted SQL queries and expressions be exact copies of subranges of the function text, rather than having random whitespace changes within them. This makes it possible to directly map parse error positions from the core parser onto positions in the function text, which lets us report them without the previous kluge of showing the intermediate internal-query form. (Later it might be good to do that for core parse-analysis errors too, but this patch is just touching plpgsql's lexer/parser, not what happens at runtime.)

  In passing, make plpgsql's lexer use palloc not malloc.

  These changes make plpgsql's parse-time error reports noticeably nicer (as illustrated by the regression test changes), and will also simplify the planned removal of plpgsql's separate lexer by reducing the impedance mismatch between what it does and what the core lexer does.
* Remove ancient text file containing plpgsql installation instructions.  (Tom Lane, 2009-11-07)
  This was long ago superseded by the standard build process and main SGML documentation.
* Rearrange plpgsql parsing to simplify and speed it up a bit.  (Tom Lane, 2009-11-07)
  * Pull the responsibility for %TYPE and %ROWTYPE out of the scanner, letting read_datatype manage it instead.
  * Avoid unnecessary scanner-driven lookups of plpgsql variables in places where it's not needed, which is actually most of the time; we do not need it in DECLARE sections nor in text that is a SQL query or expression.
  * Rationalize the set of token types returned by the scanner: distinguishing T_SCALAR, T_RECORD, T_ROW seems to complicate the grammar in more places than it simplifies it, so merge these into one token type T_DATUM; but split T_ERROR into T_DBLWORD and T_TRIPWORD for clarity and simplicity of later processing.

  Some of this will need to be revisited again when we try to make plpgsql use the core scanner, but this patch gets some of the bigger stumbling blocks out of the way.
* Keep track of language's trusted flag in InlineCodeBlock.  (Andrew Dunstan, 2009-11-06)
  Needed to support DO blocks for languages that have both trusted and untrusted variants.
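  For context, a hedged sketch of why the flag matters at the DO-block level, using plperl and plperlu as the paired trusted/untrusted languages:

      DO LANGUAGE plperl  $$ elog(NOTICE, 'runs with the trusted interpreter');   $$;
      DO LANGUAGE plperlu $$ elog(NOTICE, 'runs with the untrusted interpreter'); $$;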
* Change plpgsql from using textual substitution to insert variable references into SQL expressions, to using the newly added parser callback hooks.  (Tom Lane, 2009-11-06)
  This allows us to do the substitutions in a more semantically-aware way: a variable reference will only be recognized where it can validly go, ie, a place where a column value or parameter would be legal, instead of the former behavior that would replace any textual match including table names and column aliases (leading to syntax errors later on). A release-note-worthy fine point is that plpgsql variable names that match fully-reserved words will now need to be quoted.

  This commit preserves the former behavior that variable references take precedence over any possible match to a column name. The infrastructure is in place to support the reverse precedence or throwing an error on ambiguity, but those behaviors aren't accessible yet.

  Most of the code changes here are associated with making the namespace data structure persist so that it can be consulted at runtime, instead of throwing it away at the end of initial function parsing. The plpgsql scanner is still doing name lookups, but that behavior is now irrelevant for SQL expressions. A future commit will deal with removing unnecessary lookups.
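  A brief sketch of the precedence rule as of this commit (the conflict-handling default was changed later; see the plpgsql.variable_conflict entry above). The items table is hypothetical:

      CREATE FUNCTION set_price(item_id int, price numeric) RETURNS void AS $$
      BEGIN
          -- The SET target is not a place a parameter can appear, so the
          -- left-hand "price" stays the column; the right-hand "price" is
          -- ambiguous and, as of this commit, resolves to the function parameter.
          UPDATE items SET price = price WHERE id = item_id;
      END;
      $$ LANGUAGE plpgsql;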
* Don't treat NEW and OLD as reserved words anymore.  (Tom Lane, 2009-11-05)
  For the purposes of rules it works just as well to have them be ordinary identifiers, and this gets rid of a number of ugly special cases. Plus we aren't interfering with non-rule usage of these names.

  catversion bump because the names change internally in stored rules.
* reenable -> re-enable  (Peter Eisentraut, 2009-11-05)
  Pointed out by Debian's lintian.
* Remove plpgsql's RENAME declaration, which has bizarre and mostly nonfunctional behavior, and is so little used that no one has been interested in fixing it.  (Tom Lane, 2009-11-05)
  To ensure that possible uses are covered, remove the ALIAS declaration's arbitrary restriction that only $n identifiers can be aliased. (We could alternatively make RENAME act just like ALIAS, but per discussion having two different ways to do the same thing is probably more confusing than helpful.)
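  For reference, a hedged sketch of the ALIAS form that remains, now that it is no longer limited to $n identifiers; the trigger function and audit_log table are hypothetical:

      CREATE FUNCTION audit_items() RETURNS trigger AS $$
      DECLARE
          changed_row ALIAS FOR new;   -- ALIAS may now name any variable, not just $1, $2, ...
      BEGIN
          INSERT INTO audit_log(item_id) VALUES (changed_row.id);
          RETURN new;
      END;
      $$ LANGUAGE plpgsql;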
* Allow binary-coercible cases in ri_HashCompareOp; there are some such cases that are not handled by find_coercion_pathway, notably composite->RECORD.  (Tom Lane, 2009-11-05)
  Now that 8.4 supports composites as primary keys, it's worth dealing with this case.
* Rename some encoding conversion modules to keep pathnames in our source tarballs under 100 characters.  (Tom Lane, 2009-11-04)
  This should avoid failures with certain untarring tools (WinZip and Midnight Commander have been mentioned as likely suspects). Per my proposal of yesterday.

  catversion bumped since the initial contents of pg_proc change.
* Make expression locations for LIKE and SIMILAR TO constructs uniformly point at the first keyword of the expression, rather than drawing a rather artificial distinction between the ESCAPE subclause and the rest.  (Tom Lane, 2009-11-04)
  Per gripe from Gokulakannan Somasundaram and subsequent discussion.
* Add support for invoking parser callback hooks via SPI and in cached plans.  (Tom Lane, 2009-11-04)
  As proof of concept, modify plpgsql to use the hooks. plpgsql is still inserting $n symbols textually, but the "back end" of the parsing process now goes through the ParamRef hook instead of using a fixed parameter-type array, and then execution only fetches actually-referenced parameters, using a hook added to ParamListInfo.

  Although there's a lot left to be done in plpgsql, this already cures the "if (TG_OP = 'INSERT' and NEW.foo ...)" problem, as illustrated by the changed regression test.
* Allow rewriting ALTER TABLE to skip WAL logging.  (Heikki Linnakangas, 2009-11-04)
  Itagaki Takahiro, with small changes by me and Simon.
* Build bzip2 tarball in dist target as well.  (Peter Eisentraut, 2009-11-03)
* Fix regression tests for psql \d view patch.  (Peter Eisentraut, 2009-11-03)
* Improve PL/Python elog output.  (Peter Eisentraut, 2009-11-03)
  When the elog functions (plpy.info etc.) get a single argument, just print that argument instead of printing the single-member tuple like ('foo',).
* In psql, show view definition only with \d+, not with \d.  (Peter Eisentraut, 2009-11-03)
  The rationale is that view definitions tend to be long and obscure the main information about the view.
* Fix obscure segfault condition in PL/Python.  (Peter Eisentraut, 2009-11-03)
  In PLy_output(), when the elog() call in the TRY branch throws an exception (this can happen when a statement timeout kicks in, for example), the PyErr_SetString() call in the CATCH branch can cause a segfault, because the Py_XDECREF(so) call before it releases memory that is still used by the sv variable that PyErr_SetString() uses as argument, because sv points into memory owned by so.

  Backpatched back to 8.0, where this code was introduced. I also threw in a couple of volatile declarations for variables that are used before and after the TRY. I don't think they caused the crash that I observed, but they could become issues.
* Dept of second thoughts: after studying index_getnext() a bit more I realize that it can scribble on scan->xs_ctup.t_self while following HOT chains, so we can't rely on that to stay valid between hashgettuple() calls.  (Tom Lane, 2009-11-01)
  Introduce a private variable in HashScanOpaque, instead.
* Fix two serious bugs introduced into hash indexes by the 8.4 patch that made hash indexes keep entries sorted by hash value.  (Tom Lane, 2009-11-01)
  First, the original plans for concurrency assumed that insertions would happen only at the end of a page, which is no longer true; this could cause scans to transiently fail to find index entries in the presence of concurrent insertions. We can compensate by teaching scans to re-find their position after re-acquiring read locks.

  Second, neither the bucket split nor the bucket compaction logic had been fixed to preserve hashvalue ordering, so application of either of those processes could lead to permanent corruption of an index, in the sense that searches might fail to find entries that are present. This patch fixes the split and compaction logic to preserve hashvalue ordering, but it cannot do anything about pre-existing corruption. We will need to recommend reindexing all hash indexes in the 8.4.2 release notes.

  To buy back the performance loss hereby induced in split and compaction, fix them to use PageIndexMultiDelete instead of retail PageIndexDelete operations. We might later want to do something with qsort'ing the page contents rather than doing a binary search for each insertion, but that seemed more invasive than I cared to risk in a back-patch.

  Per bug #5157 from Jeff Janes and subsequent investigation.
* Ensure the previous Perl interpreter selection is restored upon exit from plperl_call_handler, in both the normal and error-exit paths.  (Tom Lane, 2009-10-31)
  Per report from Alexey Klyukin.
* Implement parser hooks for processing ColumnRef and ParamRef nodes, as per my recent proposal.  (Tom Lane, 2009-10-31)
  As proof of concept, remove knowledge of Params from the core parser, arranging for them to be handled entirely by parser hook functions. It turns out we need an additional hook for that --- I had forgotten about the code that handles inferring a parameter's type from context.

  This is a preliminary step towards letting plpgsql handle its variables through parser hooks. Additional work remains to be done to expose the facility through SPI, but I think this is all the changes needed in the core parser.
* Make the overflow guards in ExecChooseHashTableSize be more protective.  (Tom Lane, 2009-10-30)
  The original coding ensured nbuckets and nbatch didn't exceed INT_MAX, which while not insane on its own terms did nothing to protect subsequent code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size estimates might well be planner error rather than reality, it seems best to constrain the initial sizes to be not more than work_mem/sizeof(pointer), thus ensuring the allocated arrays don't exceed work_mem. We will allow nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches calls, but we should still guard against integer overflow in those palloc requests.

  Per bug #5145 from Bernt Marius Johnsen. Although the given test case only seems to fail back to 8.2, previous releases have variants of this issue, so patch all supported branches.
* Un-break EXPLAIN for Append plans.  (Tom Lane, 2009-10-28)
  I messed this up a few days ago while adding the ModifyTable node type --- I had been thinking ModifyTable should replace Append as a special case in push_plan(), but actually both of them have to be special-cased.
* Fix \df to re-allow regexp special characters in the function name pattern.  (Tom Lane, 2009-10-28)
  This has always worked, up until somebody's thinko here:
  http://archives.postgresql.org/pgsql-committers/2009-04/msg00233.php
  Per bug #5143 from Piotr Wolinski.
* Fix AcquireRewriteLocks to be sure that it acquires the right lock strength when FOR UPDATE is propagated down into a sub-select expanded from a view.  (Tom Lane, 2009-10-28)
  Similar bug to parser's isLockedRel issue that I fixed yesterday; likewise seems not quite worth the effort to back-patch.
* When FOR UPDATE/SHARE is used with LIMIT, put the LockRows plan node underneath the Limit node, not atop it.  (Tom Lane, 2009-10-28)
  This fixes the old problem that such a query might unexpectedly return fewer rows than the LIMIT says, due to LockRows discarding updated rows.

  There is a related problem that LockRows might destroy the sort ordering produced by earlier steps; but fixing that by pushing LockRows below Sort would create serious performance problems that are unjustified in many real-world applications, as well as potential deadlock problems from locking many more rows than expected.

  Instead, keep the present semantics of applying FOR UPDATE after ORDER BY within a single query level; but allow the user to specify the other way by writing FOR UPDATE in a sub-select. To make that work, track whether FOR UPDATE appeared explicitly in sub-selects or got pushed down from the parent, and don't flatten a sub-select that contained an explicit FOR UPDATE.
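  A hedged sketch of the two behaviors described above; the jobs table and its columns are hypothetical:

      -- Locking now happens underneath the LIMIT, so this reliably returns
      -- 5 rows even when some candidates are updated concurrently; FOR UPDATE
      -- is still applied after the ORDER BY within this query level.
      SELECT * FROM jobs WHERE state = 'ready'
      ORDER BY priority LIMIT 5 FOR UPDATE;

      -- To lock rows before sorting and limiting, write FOR UPDATE in a
      -- sub-select; such a sub-select is deliberately not flattened.
      SELECT * FROM (SELECT * FROM jobs WHERE state = 'ready' FOR UPDATE) j
      ORDER BY priority LIMIT 5;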
* Fix AfterTriggerSaveEvent to use a test and elog, not just Assert, to check that it's called within an AfterTriggerBeginQuery/AfterTriggerEndQuery pair.  (Tom Lane, 2009-10-27)
  The RI cascade triggers suppress that overhead on the assumption that they are always run non-deferred, so it's possible to violate the condition if someone mistakenly changes pg_trigger to mark such a trigger deferred. We don't really care about supporting that, but throwing an error instead of crashing seems desirable. Per report from Marcelo Costa.