path: root/src/backend/parser/parse_expr.c
* Make some spelling more consistent (Peter Eisentraut, 2013-01-05)
* Update copyrights for 2013 (Bruce Momjian, 2013-01-01)
  Fully update git head, and update back branches in ./COPYRIGHT and legal.sgml files.
* Prevent failure when RowExpr or XmlExpr is parse-analyzed twice. (Tom Lane, 2012-12-23)
  transformExpr() is required to cope with already-transformed expression trees, for various ugly-but-not-quite-worth-cleaning-up reasons. However, some of its newer subroutines hadn't gotten the memo. This accounts for bug #7763 from Norbert Buchmuller: transformRowExpr() was overwriting the previously determined type of a RowExpr during CREATE TABLE LIKE INCLUDING INDEXES. Additional investigation showed that transformXmlExpr had the same kind of problem, but all the other cases seem to be safe.
  Andres Freund and Tom Lane
* Centralize the logic for detecting misplaced aggregates, window funcs, etc. (Tom Lane, 2012-08-10)
  Formerly we relied on checking after-the-fact to see if an expression contained aggregates, window functions, or sub-selects when it shouldn't. This is grotty, easily forgotten (indeed, we had forgotten to teach DefineIndex about rejecting window functions), and none too efficient since it requires extra traversals of the parse tree.
  To improve matters, define an enum type that classifies all SQL sub-expressions, store it in ParseState to show what kind of expression we are currently parsing, and make transformAggregateCall, transformWindowFuncCall, and transformSubLink check the expression type and throw error if the type indicates the construct is disallowed. This allows removal of a large number of ad-hoc checks scattered around the code base. The enum type is sufficiently fine-grained that we can still produce error messages of at least the same specificity as before.
  Bringing these error checks together revealed that we'd been none too consistent about phrasing of the error messages, so standardize the wording a bit.
  Also, rewrite checking of aggregate arguments so that it requires only one traversal of the arguments, rather than up to three as before.
  In passing, clean up some more comments left over from add_missing_from support, and annotate some tests that I think are dead code now that that's gone. (I didn't risk actually removing said dead code, though.)
* Implement SQL-standard LATERAL subqueries. (Tom Lane, 2012-08-07)
  This patch implements the standard syntax of LATERAL attached to a sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM, since set-returning function calls are expected to be one of the principal use-cases.
  The main change here is a rewrite of the mechanism for keeping track of which relations are visible for column references while the FROM clause is being scanned. The parser "namespace" lists are no longer lists of bare RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE pointer as well as some visibility-controlling flags. Aside from supporting LATERAL correctly, this lets us get rid of the ancient hacks that required rechecking subqueries and JOIN/ON and function-in-FROM expressions for invalid references after they were initially parsed. Invalid column references are now always correctly detected on sight.
  In passing, remove assorted parser error checks that are now dead code by virtue of our having gotten rid of add_missing_from, as well as some comments that are obsolete for the same reason. (It was mainly add_missing_from that caused so much fudging here in the first place.)
  The planner support for this feature is very minimal, and will be improved in future patches. It works well enough for testing purposes, though.
  catversion bump forced due to new field in RangeTblEntry.
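  For illustration, a minimal self-contained query using the new syntax (the query is made up, not from the commit); each LATERAL item may reference columns of earlier FROM items:
      -- LATERAL sub-SELECT and LATERAL function call, both referencing v.n
      SELECT v.n, s.n_squared, u.elem
      FROM (VALUES (1), (2), (3)) AS v(n),
           LATERAL (SELECT v.n * v.n AS n_squared) AS s,
           LATERAL unnest(ARRAY[v.n, v.n + 10]) AS u(elem);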
* Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest. (Bruce Momjian, 2012-06-10)
* Restructure SELECT INTO's parsetree representation into CreateTableAsStmt. (Tom Lane, 2012-03-19)
  Making this operation look like a utility statement seems generally a good idea, and particularly so in light of the desire to provide command triggers for utility statements. The original choice of representing it as SELECT with an IntoClause appendage had metastasized into rather a lot of places, unfortunately, so that this patch is a great deal more complicated than one might at first expect.
  In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS subcommands required restructuring some EXPLAIN-related APIs. Add-on code that calls ExplainOnePlan or ExplainOneUtility, or uses ExplainOneQuery_hook, will need adjustment.
  Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO, which formerly were accepted though undocumented, are no longer accepted. The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE (see the sketch below). The CREATE RULE case doesn't seem to have much real-world use (since the rule would work only once before failing with "table already exists"), so we'll not bother with that one.
  Both SELECT INTO and CREATE TABLE AS still return a command tag of "SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn", but for the moment backwards compatibility wins the day.
  Andres Freund and Tom Lane
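  A sketch of that replacement, with hypothetical names; the prepared statement carries the query, and CREATE TABLE AS EXECUTE creates the table:
      -- Formerly: PREPARE p AS SELECT ... INTO newtab ...; EXECUTE p;
      PREPARE p AS SELECT g AS n FROM generate_series(1, 5) AS g;
      CREATE TABLE newtab AS EXECUTE p;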
* Preserve column names in the execution-time tupledesc for a RowExpr. (Tom Lane, 2012-02-14)
  The hstore and json datatypes both have record-conversion functions that pay attention to column names in the composite values they're handed. We used to not worry about inserting correct field names into tuple descriptors generated at runtime, but given these examples it seems useful to do so. Observe the nicer-looking results in the regression tests whose results changed.
  catversion bump because there is a subtle change in requirements for stored rule parsetrees: RowExprs from ROW() constructs now have to include field names.
  Andrew Dunstan and Tom Lane
* Update copyright notices for year 2012. (Bruce Momjian, 2012-01-01)
* Ensure that whole-row junk Vars are always of composite type. (Tom Lane, 2011-11-27)
  The EvalPlanQual machinery assumes that whole-row Vars generated for the outputs of non-table RTEs will be of composite types. However, for the case where the RTE is a function call returning a scalar type, we were doing the wrong thing, as a result of sharing code with a parser case where the function's scalar output is wanted. (Or at least, that's what that case has done historically; it does seem a bit inconsistent.)
  To fix, extend makeWholeRowVar's API so that it can support both use-cases. This fixes Belinda Cussen's report of crashes during concurrent execution of UPDATEs involving joins to the result of UNNEST() --- in READ COMMITTED mode, we'd run the EvalPlanQual machinery after a conflicting row update commits, and it was expecting to get a HeapTuple not a scalar datum from the "wholerowN" variable referencing the function RTE.
  Back-patch to 9.0 where the current EvalPlanQual implementation appeared. In 9.1 and up, this patch also fixes failure to attach the correct collation to the Var generated for a scalar-result case. An example:
      regression=# select upper(x.*) from textcat('ab', 'cd') x;
      ERROR: could not determine which collation to use for upper() function
* Don't let transform_null_equals=on affect CASE foo WHEN NULL ... constructs. (Heikki Linnakangas, 2011-10-08)
  transform_null_equals is only supposed to affect "foo = NULL" expressions given directly by the user, not the internal "foo = NULL" expression generated from CASE-WHEN.
  This fixes bug #6242, reported by Sergey. Backpatch to all supported branches.
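  To illustrate the distinction (a small sketch, not from the commit):
      SET transform_null_equals = on;
      -- A user-written "= NULL" is rewritten to IS NULL, so this returns a row.
      SELECT 1 WHERE NULL = NULL;
      -- The "foo = NULL" that CASE ... WHEN generates internally is not rewritten,
      -- so the comparison stays NULL and the ELSE branch is taken.
      SELECT CASE NULL::int WHEN NULL THEN 'matched' ELSE 'other' END;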
* Remove assumptions that not-equals operators cannot be in any opclass. (Tom Lane, 2011-07-06)
  get_op_btree_interpretation assumed this in order to save some duplication of code, but it's not true in general anymore because we added <> support to btree_gist. (We still assume it for btree opclasses, though.)
  Also, essentially the same logic was baked into predtest.c. Get rid of that duplication by generalizing get_op_btree_interpretation so that it can be used by predtest.c.
  Per bug report from Denis de Bernardy and investigation by Jeff Davis, though I didn't use Jeff's patch exactly as-is. Back-patch to 9.1; we do not support this usage before that.
* pgindent run before PG 9.1 beta 1. (Bruce Momjian, 2011-04-10)
* Revise collation derivation method and expression-tree representation. (Tom Lane, 2011-03-19)
  All expression nodes now have an explicit output-collation field, unless they are known to only return a noncollatable data type (such as boolean or record). Also, nodes that can invoke collation-aware functions store a separate field that is the collation value to pass to the function. This avoids confusion that arises when a function has collatable inputs and noncollatable output type, or vice versa.
  Also, replace the parser's on-the-fly collation assignment method with a post-pass over the completed expression tree. This allows us to use a more complex (and hopefully more nearly spec-compliant) assignment rule without paying for it in extra storage in every expression node.
  Fix assorted bugs in the planner's handling of collations by making collation one of the defining properties of an EquivalenceClass and by converting CollateExprs into discardable RelabelType nodes during expression preprocessing.
* Split CollateClause into separate raw and analyzed node types. (Tom Lane, 2011-03-11)
  CollateClause is now used only in raw grammar output, and CollateExpr after parse analysis. This is for clarity and to avoid carrying collation names in post-analysis parse trees: that's both wasteful and possibly misleading, since the collation's name could be changed while the parsetree still exists.
  Also, clean up assorted infelicities and omissions in processing of the node type.
* Remove collation information from TypeName, where it does not belong. (Tom Lane, 2011-03-09)
  The initial collations patch treated a COLLATE spec as part of a TypeName, following what can only be described as brain fade on the part of the SQL committee. It's a lot more reasonable to treat COLLATE as a syntactically separate object, so that it can be added in only the productions where it actually belongs, rather than needing to reject it in a boatload of places where it doesn't belong (something the original patch mostly failed to do). In addition this change lets us meet the spec's requirement to allow COLLATE anywhere in the clauses of a ColumnDef, and it avoids unfriendly behavior for constructs such as "foo::type COLLATE collation".
  To do this, pull collation information out of TypeName and put it in ColumnDef instead, thus reverting most of the collation-related changes in parse_type.c's API. I made one additional structural change, which was to use a ColumnDef as an intermediate node in AT_AlterColumnType AlterTableCmd nodes. This provides enough room to get rid of the "transform" wart in AlterTableCmd too, since the ColumnDef can carry the USING expression easily enough.
  Also fix some other minor bugs that have crept in in the same areas, like failure to copy recently-added fields of ColumnDef in copyfuncs.c.
  While at it, document the formerly secret ability to specify a collation in ALTER TABLE ALTER COLUMN TYPE, ALTER TYPE ADD ATTRIBUTE, and ALTER TYPE ALTER ATTRIBUTE TYPE; and correct some misstatements about what the default collation selection will be when COLLATE is omitted.
  BTW, the three-parameter form of format_type() should go away too, since it just contributes to the confusion in this area; but I'll do that in a separate patch.
* Per-column collation support (Peter Eisentraut, 2011-02-08)
  This adds collation support for columns and domains, a COLLATE clause to override it per expression, and B-tree index support.
  Peter Eisentraut reviewed by Pavel Stehule, Itagaki Takahiro, Robert Haas, Noah Misch
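  A compact self-contained illustration of the three pieces just listed (table and index names are made up; the "C" and "POSIX" collations exist in every installation):
      CREATE TABLE names (n text COLLATE "C");         -- column-level collation
      INSERT INTO names VALUES ('b'), ('A');
      SELECT n FROM names ORDER BY n;                  -- sorts under the column's collation
      SELECT n FROM names ORDER BY n COLLATE "POSIX";  -- per-expression override
      CREATE INDEX names_n_idx ON names (n COLLATE "POSIX");  -- B-tree index support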
* Stamp copyrights for year 2011. (Bruce Momjian, 2011-01-01)
* Refactor typenameTypeId() (Peter Eisentraut, 2010-10-25)
  Split the old typenameTypeId() into two functions: a new typenameTypeId() that returns only a type OID, and typenameTypeIdAndMod() that returns type OID and typmod. This better isolates the call sites that actually care about the typmod.
* Improve handling of domains over arrays. (Tom Lane, 2010-10-21)
  This patch eliminates various bizarre behaviors caused by sloppy thinking about the difference between a domain type and its underlying array type. In particular, the operation of updating one element of such an array has to be considered as yielding a value of the underlying array type, *not* a value of the domain, because there's no assurance that the domain's CHECK constraints are still satisfied. If we're intending to store the result back into a domain column, we have to re-cast to the domain type so that constraints are re-checked.
  For similar reasons, such a domain can't be blindly matched to an ANYARRAY polymorphic parameter, because the polymorphic function is likely to apply array-ish operations that could invalidate the domain constraints. For the moment, we just forbid such matching. We might later wish to insert an automatic downcast to the underlying array type, but such a change should also change matching of domains to ANYELEMENT for consistency.
  To ensure that all such logic is rechecked, this patch removes the original hack of setting a domain's pg_type.typelem field to match its base type; the typelem will always be zero instead. In those places where it's really okay to look through the domain type with no other logic changes, use the newly added get_base_element_type function in place of get_element_type.
  catversion bumped due to change in pg_type contents.
  Per bug #5717 from Richard Huxton and subsequent discussion.
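  A small self-contained sketch of the re-checking behavior described above (domain, table, and constraint are hypothetical):
      CREATE DOMAIN posints AS int[] CHECK (0 < ALL (VALUE));
      CREATE TABLE positives (a posints);
      INSERT INTO positives VALUES (ARRAY[1, 2, 3]);
      -- Element assignment yields the base array type; storing it back into the
      -- domain column re-casts to the domain, so the CHECK constraint is
      -- re-verified and this update is rejected.
      UPDATE positives SET a[2] = -5;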
* Fix incorrect generation of whole-row variables in planner. (Tom Lane, 2010-10-19)
  A couple of places in the planner need to generate whole-row Vars, and were cutting corners by setting vartype = RECORDOID in the Vars, even in cases where there's an identifiable named composite type for the RTE being referenced. While we mostly got away with this, it failed when there was also a parser-generated whole-row reference to the same RTE, because the two Vars weren't equal() due to the difference in vartype. Fix by providing a subroutine the planner can call to generate whole-row Vars the same way the parser does.
  Per bug #5716 from Andrew Tipton. Back-patch to 9.0 where one of the bogus calls was introduced (the other one is new in HEAD).
* Remove cvs keywords from all files. (Magnus Hagander, 2010-09-20)
* Improved version of patch to protect pg_get_expr() against misuse (Tom Lane, 2010-07-29)
  Look through join alias Vars to avoid breaking join queries, and move the test to someplace where it will catch more possible ways of calling a function. We still ought to throw away the whole thing in favor of a data-type-based solution, but that's not feasible in the back branches.
  This needs to be back-patched further than 9.0, but I don't have time to do so today. Committing now so that the fix gets into 9.0beta4.
* pgindent run for 9.0, second run (Bruce Momjian, 2010-07-06)
* stringToNode() and deparse_expression_pretty() crash on invalid input, but we have nevertheless exposed them to users via pg_get_expr(). (Heikki Linnakangas, 2010-06-30)
  It would be too much maintenance effort to rigorously check the input, so put a hack in place instead to restrict pg_get_expr() so that the argument must come from one of the system catalog columns known to contain valid expressions.
  Per report from Rushabh Lathia. Backpatch to 7.4 which is the oldest supported version at the moment.
* pgindent run for 9.0 (Bruce Momjian, 2010-02-26)
* Update copyright for the year 2010. (Bruce Momjian, 2010-01-02)
* Add an "argisrow" field to NullTest nodes, following a plan made way back inTom Lane2010-01-01
| | | | | | 8.2beta but never carried out. This avoids repetitive tests of whether the argument is of scalar or composite type. Also, be a bit more paranoid about composite arguments in some places where we previously weren't checking.
* Support ORDER BY within aggregate function calls, at long last providing a non-kluge method for controlling the order in which values are fed to an aggregate function. (Tom Lane, 2009-12-15)
  At the same time eliminate the old implementation restriction that DISTINCT was only supported for single-argument aggregates.
  Possibly release-notable behavioral change: formerly, agg(DISTINCT x) dropped null values of x unconditionally. Now, it does so only if the agg transition function is strict; otherwise nulls are treated as DISTINCT normally would, ie, you get one copy.
  Andrew Gierth, reviewed by Hitoshi Harada
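  A short self-contained example of the new syntax (queries are illustrative, not from the commit):
      -- ORDER BY inside the call controls the order in which values reach the aggregate.
      SELECT array_agg(x ORDER BY x DESC) FROM generate_series(1, 5) AS g(x);        -- {5,4,3,2,1}
      SELECT string_agg(x::text, ',' ORDER BY x) FROM generate_series(1, 3) AS g(x); -- 1,2,3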
* A better fix for the "ARRAY[...]::domain" problem. (Heikki Linnakangas, 2009-11-13)
  The previous patch worked, but the transformed ArrayExpr claimed to have a return type of "domain", even though the domain constraint was only checked by the enclosing CoerceToDomain node. With this fix, the ArrayExpr is correctly labeled with the base type of the domain.
  Per gripe by Tom Lane.
* When you do "ARRAY[...]::domain", where domain is a domain over an array type,Heikki Linnakangas2009-11-13
| | | | | | | | | we need to check domain constraints. We used to do it correctly, but 8.4 introduced a separate code path for the "ARRAY[]::arraytype" case to infer the type of an empty ARRAY construct from the cast target, and forgot to take domains into account. Per report from Florian G. Pflug.
* Fix WHERE CURRENT OF to work as designed within plpgsql. (Tom Lane, 2009-11-09)
  The argument can be the name of a plpgsql cursor variable, which formerly was converted to $N before the core parser saw it, but that's no longer the case. Deal with plain name references to plpgsql variables, and add a regression test case that exposes the failure.
* Implement parser hooks for processing ColumnRef and ParamRef nodes, as per my recent proposal. (Tom Lane, 2009-10-31)
  As proof of concept, remove knowledge of Params from the core parser, arranging for them to be handled entirely by parser hook functions. It turns out we need an additional hook for that --- I had forgotten about the code that handles inferring a parameter's type from context.
  This is a preliminary step towards letting plpgsql handle its variables through parser hooks. Additional work remains to be done to expose the facility through SPI, but I think this is all the changes needed in the core parser.
* Make FOR UPDATE/SHARE in the primary query not propagate into WITH queries. (Tom Lane, 2009-10-27)
  For example, in
      WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE
  the FOR UPDATE will now affect bar but not foo. This is more useful and consistent than the original 8.4 behavior, which tried to propagate FOR UPDATE into the WITH query but always failed due to assorted implementation restrictions. Even though we are in process of removing those restrictions, it seems correct on philosophical grounds to not let the outer query's FOR UPDATE affect the WITH query.
  In passing, fix isLockedRel which frequently got things wrong in nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the current query level, not subqueries. This has been broken for a long time, but it doesn't seem worth back-patching further than 8.4 because the actual consequences are minimal. At worst the parser would sometimes get RowShareLock on a relation when it should be AccessShareLock or vice versa. That would only make a difference if someone were using ExclusiveLock concurrently, which no standard operation does, and anyway FOR UPDATE doesn't result in visible changes so it's not clear that the someone would notice any problem. Between that and the fact that FOR UPDATE barely works with subqueries at all in existing releases, I'm not excited about worrying about it.
* Remove add_missing_from GUC and associated parser support for "implicit RTEs". (Tom Lane, 2009-10-21)
  Per recent discussion, add_missing_from has been deprecated for long enough to consider removing, and it's getting in the way of planned parser refactoring. The system now always behaves as though add_missing_from were OFF.
* Support use of function argument names to identify which actual arguments match which function parameters. (Tom Lane, 2009-10-08)
  The syntax uses AS, for example
      funcname(value AS arg1, anothervalue AS arg2)
  Pavel Stehule
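  For context, a self-contained sketch of a named-argument call (the function is made up). The AS spelling above is what this commit added; released versions of PostgreSQL later settled on different punctuation for named notation, so the runnable call below uses the := form:
      CREATE FUNCTION fmt(val int, prefix text) RETURNS text
          LANGUAGE sql AS $$ SELECT $2 || $1::text $$;
      -- Commit-era spelling:  SELECT fmt(42 AS val, 'n = ' AS prefix);
      SELECT fmt(val := 42, prefix := 'n = ');   -- n = 42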
* Fix bug with WITH RECURSIVE immediately inside WITH RECURSIVE. (Tom Lane, 2009-09-09)
  99% of the code was already okay with this, but the hack that obtained the output column types of a recursive union in advance of doing real parse analysis of the recursive union forgot to handle the case where there was an inner WITH clause available to the non-recursive term. Best fix seems to be to refactor so that we don't need the "throwaway" parse analysis step at all. Instead, teach the transformSetOperationStmt code to set up the CTE's output column information after it's processed the non-recursive term normally.
  Per report from David Fetter.
* Make backend header files C++ safe (Peter Eisentraut, 2009-07-16)
  This alters various incidental uses of C++ key words to use other similar identifiers, so that a C++ compiler won't choke outright. You still (probably) need extern "C" { }; around the inclusion of backend headers.
  Based on a patch by Kurt Harriman <harriman@acm.org>
  Also add a script cpluspluscheck to check for C++ compatibility in the future. As of right now, this passes without error for me.
* 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list provided by Andrew. (Bruce Momjian, 2009-06-11)
* Support column-level privileges, as required by SQL standard. (Tom Lane, 2009-01-22)
  Stephen Frost, with help from KaiGai Kohei and others
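  A minimal self-contained illustration of the feature (role, table, and column names are made up):
      CREATE TABLE people (name text, salary int);
      CREATE ROLE reporting LOGIN;
      GRANT SELECT (name) ON people TO reporting;   -- grant on one column only
      -- Connected as "reporting":
      --   SELECT name FROM people;           -- allowed
      --   SELECT name, salary FROM people;   -- fails with a permission-denied error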
* Update copyright for 2009. (Bruce Momjian, 2009-01-01)
* Support window functions a la SQL:2008. (Tom Lane, 2008-12-28)
  Hitoshi Harada, with some kibitzing from Heikki and Tom.
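  A small self-contained example of the window-function syntax this adds (the query is illustrative, not from the commit):
      -- Running total over all rows, and a rank within each parity partition.
      SELECT x,
             sum(x) OVER (ORDER BY x)                    AS running_sum,
             rank() OVER (PARTITION BY x % 2 ORDER BY x) AS rank_in_parity
      FROM generate_series(1, 6) AS g(x);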
* Better solution to the IN-list issue: instead of having an arbitrary cutoff, treat Var and non-Var IN-list items differently. (Tom Lane, 2008-10-26)
  Only non-Var items are candidates to go into an ANY(ARRAY) construct --- we put all Vars as separate OR conditions on the grounds that that leaves more scope for optimization.
  Per suggestion from Robert Haas.
* Add a heuristic to transformAExprIn() to make it prefer expanding "x IN (list)" into an OR of equality comparisons, rather than x = ANY(ARRAY[...]), when there are Vars in the right-hand side. (Tom Lane, 2008-10-25)
  This avoids a performance regression compared to pre-8.2 releases, in cases where the OR form can be optimized into scans of multiple indexes. Limit the possible downside by preferring this form only when the list isn't very long (I set the cutoff at 32 elements, which is a bit arbitrary but in the right ballpark). Per discussion with Jim Nasby.
  In passing, also make it try the OR form if it cannot select a common type for the array elements; we've seen a complaint or two about how the OR form worked for such cases and ARRAY doesn't.
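  A sketch of the two expansions these IN-list entries choose between (hypothetical table; the exact form chosen depends on the heuristics described above):
      CREATE TABLE items (a integer, b integer);
      -- All-constant list: can be collapsed into a single ScalarArrayOpExpr,
      -- roughly  a = ANY ('{1,2,3}'::integer[]).
      SELECT * FROM items WHERE a IN (1, 2, 3);
      -- A list containing a Var: the Var is kept as a separate OR arm so the
      -- planner can still consider separate index scans, roughly
      --   a = b OR a = ANY ('{1,2}'::integer[]).
      SELECT * FROM items WHERE a IN (b, 1, 2);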
* When expanding a whole-row Var into a RowExpr during ResolveNew(), attach the column alias names of the RTE referenced by the Var to the RowExpr. (Tom Lane, 2008-10-06)
  This is needed to allow ruleutils.c to correctly deparse FieldSelect nodes referencing such a construct. Per my recent bug report.
  Adding a field to RowExpr forces initdb (because of stored rules changes) so this solution is not back-patchable; which is unfortunate because 8.2 and 8.3 have this issue. But it only affects EXPLAIN for some pretty odd corner cases, so we can probably live without a solution for the back branches.
* Add a bunch of new error location reports to parse-analysis error messages. (Tom Lane, 2008-09-01)
  There are still some weak spots around JOIN USING and relation alias lists, but most errors reported within backend/parser/ now have locations.
* Fix the raw-parsetree representation of star (as in SELECT * FROM or SELECT foo.*) so that it cannot be confused with a quoted identifier "*". (Tom Lane, 2008-08-30)
  Instead create a separate node type A_Star to represent this notation.
  Per pgsql-hackers discussion of 2007-Sep-27.
* Extend the parser location infrastructure to include a location field in most node types used in expression trees (both before and after parse analysis). (Tom Lane, 2008-08-28)
  This allows us to place an error cursor in many situations where we formerly could not, because the information wasn't available beyond the very first level of parse analysis. There's a fair amount of work still to be done to persuade individual ereport() calls to actually include an error location, but this gets the initdb-forcing part of the work out of the way; and the situation is already markedly better than before for complaints about unimplementable implicit casts, such as CASE and UNION constructs with incompatible alternative data types.
  Per my proposal of a few days ago.
* Move exprType(), exprTypmod(), expression_tree_walker(), and related routines into nodes/nodeFuncs, so as to reduce wanton cross-subsystem #includes inside the backend. (Tom Lane, 2008-08-25)
  There's probably more that should be done along this line, but this is a start anyway.
* Arrange to convert EXISTS subqueries that are equivalent to hashable IN subqueries into the same thing you'd have gotten from IN (except always with unknownEqFalse = true, so as to get the proper semantics for an EXISTS). (Tom Lane, 2008-08-22)
  I believe this fixes the last case within CVS HEAD in which an EXISTS could give worse performance than an equivalent IN subquery.
  The tricky part of this is that if the upper query probes the EXISTS for only a few rows, the hashing implementation can actually be worse than the default, and therefore we need to make a cost-based decision about which way to use. But at the time when the planner generates plans for subqueries, it doesn't really know how many times the subquery will be executed. The least invasive solution seems to be to generate both plans and postpone the choice until execution. Therefore, in a query that has been optimized this way, EXPLAIN will show two subplans for the EXISTS, of which only one will actually get executed.
  There is a lot more that could be done based on this infrastructure: in particular it's interesting to consider switching to the hash plan if we start out using the non-hashed plan but find a lot more upper rows going by than we expected. I have therefore left some minor inefficiencies in place, such as initializing both subplans even though we will currently only use one.