path: root/src/backend/executor/execUtils.c
...
* Fix whole-row Var evaluation to cope with resjunk columns (again). (Tom Lane, 2012-07-20)

  When a whole-row Var is reading the result of a subquery, we need it to ignore any "resjunk" columns that the subquery might have evaluated for GROUP BY or ORDER BY purposes. We've hacked this area before, in commit 68e40998d058c1f6662800a648ff1e1ce5d99cba, but that fix only covered whole-row Vars of named composite types, not those of RECORD type; and it was mighty klugy anyway, since it just assumed without checking that any extra columns in the result must be resjunk. A proper fix requires getting hold of the subquery's targetlist so we can actually see which columns are resjunk (whereupon we can use a JunkFilter to get rid of them). So bite the bullet and add some infrastructure to make that possible.

  Per report from Andrew Dunstan and additional testing by Merlin Moncure. Back-patch to all supported branches. In 8.3, also back-patch commit 292176a118da6979e5d368a4baf27f26896c99a5, which for some reason I had not done at the time, but it's a prerequisite for this change.
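  A minimal C sketch, not taken from the commit, of the JunkFilter pattern mentioned above; the targetlist, estate, and slot variables are hypothetical placeholders:

      /* Build a filter from the subquery's targetlist; resjunk entries are
       * excluded from the filter's output descriptor. */
      JunkFilter *junkfilter = ExecInitJunkFilter(subquery_targetlist,
                                                  false,   /* no OID column */
                                                  ExecInitExtraTupleSlot(estate));

      /* Project only the non-junk columns of each raw row into a clean slot. */
      TupleTableSlot *cleanslot = ExecFilterJunk(junkfilter, rawslot);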
* Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest. (Bruce Momjian, 2012-06-10)
* Restructure SELECT INTO's parsetree representation into CreateTableAsStmt. (Tom Lane, 2012-03-19)

  Making this operation look like a utility statement seems generally a good idea, and particularly so in light of the desire to provide command triggers for utility statements. The original choice of representing it as SELECT with an IntoClause appendage had metastasized into rather a lot of places, unfortunately, so that this patch is a great deal more complicated than one might at first expect.

  In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS subcommands required restructuring some EXPLAIN-related APIs. Add-on code that calls ExplainOnePlan or ExplainOneUtility, or uses ExplainOneQuery_hook, will need adjustment.

  Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO, which formerly were accepted though undocumented, are no longer accepted. The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE. The CREATE RULE case doesn't seem to have much real-world use (since the rule would work only once before failing with "table already exists"), so we'll not bother with that one.

  Both SELECT INTO and CREATE TABLE AS still return a command tag of "SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn", but for the moment backwards compatibility wins the day.

  Andres Freund and Tom Lane
* Update copyright notices for year 2012. (Bruce Momjian, 2012-01-01)
* Rearrange the implementation of index-only scans. (Tom Lane, 2011-10-11)

  This commit changes index-only scans so that data is read directly from the index tuple without first generating a faux heap tuple. The only immediate benefit is that indexes on system columns (such as OID) can be used in index-only scans, but this is necessary infrastructure if we are ever to support index-only scans on expression indexes. The executor is now ready for that, though the planner still needs substantial work to recognize the possibility.

  To do this, Vars in index-only plan nodes have to refer to index columns not heap columns. I introduced a new special varno, INDEX_VAR, to mark such Vars to avoid confusion. (In passing, this commit renames the two existing special varnos to OUTER_VAR and INNER_VAR.) This allows ruleutils.c to handle them with logic similar to what we use for subplan reference Vars.

  Since index-only scans are now fundamentally different from regular indexscans so far as their expression subtrees are concerned, I also chose to change them to have their own plan node type (and hence, their own executor source file).
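  A minimal C sketch, not from the commit, of how executor- or ruleutils-style code can branch on the special varnos named above; the helper function itself is hypothetical:

      /* Hypothetical helper: classify a Var by its varno after this change. */
      static const char *
      describe_var_origin(Var *var)
      {
          switch (var->varno)
          {
              case OUTER_VAR:
                  return "outer subplan targetlist entry";
              case INNER_VAR:
                  return "inner subplan targetlist entry";
              case INDEX_VAR:
                  return "index column (index-only scan)";
              default:
                  return "ordinary range-table relation column";
          }
      }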
* Remove unnecessary #include references, per pgrminclude script. (Bruce Momjian, 2011-09-01)
* Fix trigger WHEN conditions when both BEFORE and AFTER triggers exist. (Tom Lane, 2011-08-21)

  Due to tuple-slot mismanagement, evaluation of WHEN conditions for AFTER ROW UPDATE triggers could crash if there had been a BEFORE ROW trigger fired for the same update. Fix by not trying to overload the use of estate->es_trig_tuple_slot. Per report from Yoran Heling.

  Back-patch to 9.0, when trigger WHEN conditions were introduced.
* Pass collations to functions in FunctionCallInfoData, not FmgrInfo. (Tom Lane, 2011-04-12)

  Since collation is effectively an argument, not a property of the function, FmgrInfo is really the wrong place for it; and this becomes critical in cases where a cached FmgrInfo is used for varying purposes that might need different collation settings. Fix by passing it in FunctionCallInfoData instead. In particular this allows a clean fix for bug #5970 (record_cmp not working). This requires touching a bit more code than the original method, but nobody ever thought that collations would not be an invasive patch...
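  A minimal C sketch, not from the commit, of the resulting call pattern, where the collation rides along with each call rather than living in FmgrInfo; flinfo (assumed already set up with fmgr_info), collation_oid, and the argument Datums are hypothetical:

      FunctionCallInfoData fcinfo;
      Datum                result;

      /* The collation is supplied per call, alongside the arguments. */
      InitFunctionCallInfoData(fcinfo, &flinfo, 2, collation_oid, NULL, NULL);
      fcinfo.arg[0] = left_datum;
      fcinfo.arg[1] = right_datum;
      fcinfo.argnull[0] = false;
      fcinfo.argnull[1] = false;

      result = FunctionCallInvoke(&fcinfo);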
* pgindent run before PG 9.1 beta 1. (Bruce Momjian, 2011-04-10)
* Fix check_exclusion_constraint() to insert correct collations in ScanKeys. (Tom Lane, 2011-03-27)
* Refactor the executor's API to support data-modifying CTEs better. (Tom Lane, 2011-02-27)

  The originally committed patch for modifying CTEs didn't interact well with EXPLAIN, as noted by myself, and also had corner-case problems with triggers, as noted by Dean Rasheed. Those problems show it is really not practical for ExecutorEnd to call any user-defined code; so split the cleanup duties out into a new function ExecutorFinish, which must be called between the last ExecutorRun call and ExecutorEnd. Some Asserts have been added to these functions to help verify correct usage.

  It is no longer necessary for callers of the executor to call AfterTriggerBeginQuery/AfterTriggerEndQuery for themselves, as this is now done by ExecutorStart/ExecutorFinish respectively. If you really need to suppress that and do it for yourself, pass EXEC_FLAG_SKIP_TRIGGERS to ExecutorStart.

  Also, refactor portal commit processing to allow for the possibility that PortalDrop will invoke user-defined code. I think this is not actually necessary just yet, since the portal-execution-strategy logic forces any non-pure-SELECT query to be run to completion before we will consider committing. But it seems like good future-proofing.
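  A minimal C sketch, not from the commit, of the caller-visible sequence after this refactoring; queryDesc is assumed to have been built elsewhere (e.g. with CreateQueryDesc):

      ExecutorStart(queryDesc, 0);       /* now calls AfterTriggerBeginQuery itself */
      ExecutorRun(queryDesc, ForwardScanDirection, 0L);   /* 0 = run to completion */
      ExecutorFinish(queryDesc);         /* fires AFTER triggers; must precede ExecutorEnd */
      ExecutorEnd(queryDesc);

      /* To manage trigger queuing yourself instead: */
      ExecutorStart(queryDesc, EXEC_FLAG_SKIP_TRIGGERS);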
* Support data-modifying commands (INSERT/UPDATE/DELETE) in WITH. (Tom Lane, 2011-02-25)

  This patch implements data-modifying WITH queries according to the semantics that the updates all happen with the same command counter value, and in an unspecified order. Therefore one WITH clause can't see the effects of another, nor can the outer query see the effects other than through the RETURNING values. And attempts to do conflicting updates will have unpredictable results. We'll need to document all that.

  This commit just fixes the code; documentation updates are waiting on author.

  Marko Tiikkaja and Hitoshi Harada
* Stamp copyrights for year 2011. (Bruce Momjian, 2011-01-01)
* Create core infrastructure for KNNGIST. (Tom Lane, 2010-12-02)

  This is a heavily revised version of builtin_knngist_core-0.9. The ordering operators are no longer mixed in with actual quals, which would have confused not only humans but significant parts of the planner. Instead, ordering operators are carried separately throughout planning and execution.

  Since the API for ambeginscan and amrescan functions had to be changed anyway, this commit takes the opportunity to rationalize that a bit. RelationGetIndexScan no longer forces a premature index_rescan call; instead, callers of index_beginscan must call index_rescan too. Aside from making the AM-side initialization logic a bit less peculiar, this has the advantage that we do not make a useless extra am_rescan call when there are runtime key values. AMs formerly could not assume that the key values passed to amrescan were actually valid; now they can.

  Teodor Sigaev and Tom Lane
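  A minimal C sketch, not from the commit, of the index-scan call sequence implied above; the relations, snapshot, and scan key are hypothetical:

      IndexScanDesc scan;
      HeapTuple     tup;

      /* index_beginscan no longer does an implicit rescan ... */
      scan = index_beginscan(heapRel, indexRel, snapshot, 1, 0);
      /* ... so the caller supplies the (now guaranteed-valid) keys via index_rescan. */
      index_rescan(scan, &scankey, 1, NULL, 0);

      while ((tup = index_getnext(scan, ForwardScanDirection)) != NULL)
      {
          /* process each matching heap tuple */
      }
      index_endscan(scan);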
* Remove cvs keywords from all files. (Magnus Hagander, 2010-09-20)
* Remove a sanity check in the exclusion-constraint code that prevented users from defining non-self-conflicting constraints. (Tom Lane, 2010-07-16)

  Jeff Davis

  Note: I (tgl) objected to removing this check in 9.0 on the grounds that it was an important sanity check in new, poorly tested code. However, it should be all right to remove it for 9.1, since we'll get field testing from the 9.0 branch.
* pgindent run for 9.0, second run. (Bruce Momjian, 2010-07-06)
* Add C comment that we will have to remove an exclusion constraint check if we ever implement '<>' index opclasses. (Bruce Momjian, 2010-05-29)

  Jeff Davis
* pgindent run for 9.0. (Bruce Momjian, 2010-02-26)
* Remove old-style VACUUM FULL (which was known for a little while as VACUUM FULL INPLACE), along with a boatload of subsidiary code and complexity. (Tom Lane, 2010-02-08)

  Per discussion, the use case for this method of vacuuming is no longer large enough to justify maintaining it; not to mention that we don't wish to invest the work that would be needed to make it play nicely with Hot Standby.

  Aside from the code directly related to old-style VACUUM FULL, this commit removes support for certain WAL record types that could only be generated within VACUUM FULL, redirect-pointer removal in heap_page_prune, and nontransactional generation of cache invalidation sinval messages (the last being the sticking point for Hot Standby).

  We still have to retain all code that copes with finding HEAP_MOVED_OFF and HEAP_MOVED_IN flag bits on existing tuples. This can't be removed as long as we want to support in-place update from pre-9.0 databases.
* check_exclusion_constraint didn't actually work correctly for index expressions. (Tom Lane, 2010-01-02)

  FormIndexDatum requires the estate's scantuple to already point at the tuple the values are supposedly being extracted from. Adjust test case so that this type of confusion will be exposed. Per report from hubert depesz lubaczewski.
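  A minimal C sketch, not from the commit, of the requirement it describes; indexInfo, slot, values, and isnull are hypothetical locals:

      ExprContext *econtext = GetPerTupleExprContext(estate);

      /* Index expressions are evaluated against ecxt_scantuple, so point it
       * at the tuple being examined first -- this was the missing step. */
      econtext->ecxt_scantuple = slot;
      FormIndexDatum(indexInfo, slot, estate, values, isnull);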
* Update copyright for the year 2010. (Bruce Momjian, 2010-01-02)
* Add exclusion constraints, which generalize the concept of uniqueness to support any indexable commutative operator, not just equality. (Tom Lane, 2009-12-07)

  Two rows violate the exclusion constraint if "row1.col OP row2.col" is TRUE for each of the columns in the constraint.

  Jeff Davis, reviewed by Robert Haas
* Add a WHEN clause to CREATE TRIGGER, allowing a boolean expression to be checked to determine whether the trigger should be fired. (Tom Lane, 2009-11-20)

  For BEFORE triggers this is mostly a matter of spec compliance; but for AFTER triggers it can provide a noticeable performance improvement, since queuing of a deferred trigger event and re-fetching of the row(s) at end of statement can be short-circuited if the trigger does not need to be fired.

  Takahiro Itagaki, reviewed by KaiGai Kohei.
* Re-implement EvalPlanQual processing to improve its performance and eliminate a lot of strange behaviors that occurred in join cases. (Tom Lane, 2009-10-26)

  We now identify the "current" row for every joined relation in UPDATE, DELETE, and SELECT FOR UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the appropriate row into each scan node in the rechecking plan, forcing it to emit only that one row. The former behavior could rescan the whole of each joined relation for each recheck, which was terrible for performance, and what's much worse could result in duplicated output tuples.

  Also, the original implementation of EvalPlanQual could not re-use the recheck execution tree --- it had to go through a full executor init and shutdown for every row to be tested. To avoid this overhead, I've associated a special runtime Param with each LockRows or ModifyTable plan node, and arranged to make every scan node below such a node depend on that Param. Thus, by signaling a change in that Param, the EPQ machinery can just rescan the already-built test plan.

  This patch also adds a prohibition on set-returning functions in the targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the duplicate-output-tuple problem. It seems fairly reasonable since the other restrictions on SELECT FOR UPDATE are meant to ensure that there is a unique correspondence between source tuples and result tuples, which an output SRF destroys as much as anything else does.
* Move the handling of SELECT FOR UPDATE locking and rechecking out of execMain.c and into a new plan node type LockRows. (Tom Lane, 2009-10-12)

  Like the recent change to put table updating into a ModifyTable plan node, this increases planning flexibility by allowing the operations to occur below the top level of the plan tree. It's necessary in any case to restore the previous behavior of having FOR UPDATE locking occur before ModifyTable does.

  This partially refactors EvalPlanQual to allow multiple rows-under-test to be inserted into the EPQ machinery before starting an EPQ test query. That isn't sufficient to fix EPQ's general bogosity in the face of plans that return multiple rows per test row, though. Since this patch is mostly about getting some plan node infrastructure in place and not about fixing ten-year-old bugs, I will leave EPQ improvements for another day.

  Another behavioral change that we could now think about is doing FOR UPDATE before LIMIT, but that too seems like it should be treated as a followon patch.
* Remove very ancient tuple-counting infrastructure (IncrRetrieved() and friends). (Tom Lane, 2009-10-08)

  This code has all been ifdef'd out for many years, and doesn't seem to have any prospect of becoming any more useful in the future. EXPLAIN ANALYZE is what people use in practice, and I think if we did want process-wide counters we'd be more likely to put in dtrace events for that than try to resurrect this code. Get rid of it so as to have one less detail to worry about while refactoring execMain.c.
* Replace the array-style TupleTable data structure with a simple List of TupleTableSlot nodes. (Tom Lane, 2009-09-27)

  This eliminates the need to count in advance how many Slots will be needed, which seems more than worth the small increase in the amount of palloc traffic during executor startup. The ExecCountSlots infrastructure is now all dead code, but I'll remove it in a separate commit for clarity. Per a comment from Robert Haas.
* Support deferrable uniqueness constraints. (Tom Lane, 2009-07-29)

  The current implementation fires an AFTER ROW trigger for each tuple that looks like it might be non-unique according to the index contents at the time of insertion. This works well as long as there aren't many conflicts, but won't scale to massive unique-key reassignments. Improving that case is a TODO item.

  Dean Rasheed
* Fix error cleanup failure caused by 8.4 changes in plpgsql to try to avoid memory leakage in error recovery. (Tom Lane, 2009-07-18)

  We were calling FreeExprContext, and therefore invoking ExprContextCallback callbacks, in both normal and error exits from subtransactions. However this isn't very safe, as shown in recent trouble report from Frank van Vugt, in which releasing a tupledesc refcount failed. It's also unnecessary, since the resources that callbacks might wish to release should be cleaned up by other error recovery mechanisms (ie the resource owners). We only really want FreeExprContext to release memory attached to the exprcontext in the error-exit case. So, add a bool parameter to FreeExprContext to tell it not to call the callbacks.

  A more general solution would be to pass the isCommit bool parameter on to the callbacks, so they could do only safe things during error exit. But that would make the patch significantly more invasive and possibly break third-party code that registers ExprContextCallback callbacks. We might want to do that later in HEAD, but for now I'll just do what seems reasonable to back-patch.
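  A minimal C sketch, not from the commit, of the API shape after this change; the callback, state variable, and wrapper function are hypothetical:

      /* Hypothetical shutdown callback registered on an expression context. */
      static void
      my_cleanup_callback(Datum arg)
      {
          /* release whatever resource 'arg' identifies */
      }

      static void
      use_and_release(ExprContext *econtext, void *my_state, bool isCommit)
      {
          RegisterExprContextCallback(econtext, my_cleanup_callback,
                                      PointerGetDatum(my_state));

          /* ... expression evaluation that relies on my_state ... */

          /*
           * isCommit = true: callbacks run, then the context's memory is freed.
           * isCommit = false (error exit, the new behavior): memory is freed but
           * the callbacks are skipped; resource owners handle cleanup instead.
           */
          FreeExprContext(econtext, isCommit);
      }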
* 8.4 pgindent run, with new combined Linux/FreeBSD/MinGW typedef list provided by Andrew. (Bruce Momjian, 2009-06-11)
* Refactor ExecProject and associated routines so that fast-path code is used for simple Var targetlist entries all the time, even when there are other entries that are not simple Vars. (Tom Lane, 2009-04-02)

  Also, ensure that we prefetch attributes (with slot_getsomeattrs) for all Vars in the targetlist, even those buried within expressions. In combination these changes seem to significantly reduce the runtime for cases where tlists are mostly but not exclusively Vars. Per my proposal of yesterday.
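  A minimal C sketch, not from the commit, of the prefetch idea; the slot and attribute numbers are hypothetical:

      bool  isnull;
      Datum d;

      /* Deform the tuple once, up to the highest attribute any Var needs ... */
      slot_getsomeattrs(slot, max_attnum_needed);

      /* ... so subsequent per-column fetches are cheap array lookups. */
      d = slot_getattr(slot, 3, &isnull);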
* Update copyright for 2009. (Bruce Momjian, 2009-01-01)
* As noted by Andrew Gierth, there's really no need any more to force a junk filter to be used when INSERT or SELECT INTO has a plan that returns raw disk tuples. (Tom Lane, 2008-07-26)

  The virtual-tuple-slot optimizations that were put in place awhile ago mean that ExecInsert has to do ExecMaterializeSlot, and that already copies the tuple if it's raw (and does so more efficiently than a junk filter, too). So get rid of that logic. This in turn means that we can throw away ExecMayReturnRawTuples, which wasn't used for any other purpose, and was always a kluge anyway.

  In passing, move a couple of SELECT-INTO-specific fields out of EState and into the private state of the SELECT INTO DestReceiver, as was foreseen in an old comment there. Also make intorel_receive use ExecMaterializeSlot not ExecCopySlotTuple, for consistency with ExecInsert and to possibly save a tuple copy step in some cases.
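  A minimal C sketch, not from the commit, of the point being made: materializing the slot already produces a local copy of a raw disk tuple, so the junk filter added nothing here.

      /* Make the slot hold a self-contained tuple in its own memory context;
       * if the slot currently points at a raw disk tuple, this copies it. */
      HeapTuple tuple = ExecMaterializeSlot(slot);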
* Move the HTSU_Result enum definition into snapshot.h, to avoid including tqual.h into heapam.h. (Alvaro Herrera, 2008-03-26)

  This makes all inclusion of tqual.h explicit. I also sorted alphabetically the includes on some source files.
* Update copyrights in source tree to 2008. (Bruce Momjian, 2008-01-01)
* Avoid incrementing the CommandCounter when CommandCounterIncrement is called but no database changes have been made since the last CommandCounterIncrement. (Tom Lane, 2007-11-30)

  This should result in a significant improvement in the number of "commands" that can typically be performed within a transaction before hitting the 2^32 CommandId size limit. In particular this buys back (and more) the possible adverse consequences of my previous patch to fix plan caching behavior.

  The implementation requires tracking whether the current CommandCounter value has been "used" to mark any tuples. CommandCounter values stored into snapshots are presumed not to be used for this purpose. This requires some small executor changes, since the executor used to conflate the curcid of the snapshot it was using with the command ID to mark output tuples with. Separating these concepts allows some small simplifications in executor APIs.

  Something for the TODO list: look into having CommandCounterIncrement not do AcceptInvalidationMessages. It seems fairly bogus to be doing it there, but exactly where to do it instead isn't clear, and I'm disinclined to mess with asynchronous behavior during late beta.
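  A minimal C sketch, not from the commit, of the distinction it introduces; the surrounding context is hypothetical:

      /* Reading the value for a snapshot: does not mark the counter as used. */
      CommandId snapshot_cid = GetCurrentCommandId(false);

      /* Stamping a new or updated tuple: marks the value as used, so the next
       * CommandCounterIncrement() really advances the counter. */
      CommandId write_cid = GetCurrentCommandId(true);

      CommandCounterIncrement();   /* a no-op if the current value was never used */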
* pgindent run for 8.3. (Bruce Momjian, 2007-11-15)
* HOT updates. (Tom Lane, 2007-09-20)

  When we update a tuple without changing any of its indexed columns, and the new version can be stored on the same heap page, we no longer generate extra index entries for the new version. Instead, index searches follow the HOT-chain links to ensure they find the correct tuple version.

  In addition, this patch introduces the ability to "prune" dead tuples on a per-page basis, without having to do a complete VACUUM pass to recover space. VACUUM is still needed to clean up dead index entries, however.

  Pavan Deolasee, with help from a bunch of other people.
* Arrange to cache a ResultRelInfo in the executor's EState for relations that are not one of the query's defined result relations, but nonetheless have triggers fired against them while the query is active. (Tom Lane, 2007-08-15)

  This was formerly impossible but can now occur because of my recent patch to fix the firing order for RI triggers. Caching a ResultRelInfo avoids duplicating work by repeatedly opening and closing the same relation, and also allows EXPLAIN ANALYZE to "see" and report on these extra triggers.

  Use the same mechanism to cache open relations when firing deferred triggers at transaction shutdown; this replaces the former one-element-cache strategy used in that case, and should improve performance a bit when there are deferred triggers on a number of relations.
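  A minimal C sketch of the caching lookup this appears to introduce; as far as I can tell the routine is ExecGetTriggerResultRel, but treat the exact name and signature as an assumption rather than something stated in the log:

      /* Fetch (or build and cache in the EState) a ResultRelInfo for a relation
       * that has triggers firing against it but is not a named result relation. */
      ResultRelInfo *rInfo = ExecGetTriggerResultRel(estate, relid);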
* If we're gonna use ExecRelationIsTargetRelation here, might as well simplify a bit further. (Tom Lane, 2007-07-31)
* Slight refactor for ExecOpenScanRelation(): we can use ExecRelationIsTargetRelation() to check if the relation is a target rel, rather than scanning through the result relation array ourselves. (Neil Conway, 2007-07-27)
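  A minimal C sketch, not from the commit, of the check being reused; estate and scanrelid are hypothetical:

      /* True when the range-table index 'scanrelid' names one of the query's
       * result (target) relations, e.g. the table being UPDATEd. */
      if (ExecRelationIsTargetRelation(estate, scanrelid))
      {
          /* the relation was already opened and locked as a result relation */
      }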
* Get rid of the separate EState for subplans, and just let them share the parent query's EState. (Tom Lane, 2007-02-27)

  Now that there's a single flat rangetable for both the main plan and subplans, there's no need anymore for a separate EState, and removing it allows cleaning up some crufty code in nodeSubplan.c and nodeSubqueryscan.c. Should be a tad faster too, although any difference will probably be hard to measure. This is the last bit of subsidiary mop-up work from changing to a flat rangetable.
* Turn the rangetable used by the executor into a flat list, and avoid storing useless substructure for its RangeTblEntry nodes. (Tom Lane, 2007-02-22)

  (I chose to keep using the same struct node type and just zero out the link fields for unneeded info, rather than making a separate ExecRangeTblEntry type --- it seemed too fragile to have two different rangetable representations.)

  Along the way, put subplans into a list in the toplevel PlannedStmt node, and have SubPlan nodes refer to them by list index instead of direct pointers. Vadim wanted to do that years ago, but I never understood what he was on about until now. It makes things a *whole* lot more robust, because we can stop worrying about duplicate processing of subplans during expression tree traversals. That's been a constant source of bugs, and it's finally gone.

  There are some consequent simplifications yet to be made, like not using a separate EState for subplans in the executor, but I'll tackle that later.
* Remove the Query structure from the executor's API. (Tom Lane, 2007-02-20)

  This allows us to stop storing mostly-redundant Query trees in prepared statements, portals, etc. To replace Query, a new node type called PlannedStmt is inserted by the planner at the top of a completed plan tree; this carries just the fields of Query that are still needed at runtime. The statement lists kept in portals etc. now consist of intermixed PlannedStmt and bare utility-statement nodes --- no Query. This incidentally allows us to remove some fields from Query and Plan nodes that shouldn't have been there in the first place.

  Still to do: simplify the execution-time range table; at the moment the range table passed to the executor still contains Query trees for subqueries.

  initdb forced due to change of stored rules.
* Remove typmod checking from the recent security-related patches. (Tom Lane, 2007-02-06)

  It turns out that ExecEvalVar and friends don't necessarily have access to a tuple descriptor with correct typmod: it definitely can contain -1, and possibly might contain other values that are different from the Var's value. Arguably this should be cleaned up someday, but it's not a simple change, and in any case typmod discrepancies don't pose a security hazard. Per reports from numerous people :-(

  I'm not entirely sure whether the failure can occur in 8.0 --- the simple test cases reported so far don't trigger it there. But back-patch the change all the way anyway.
* Repair failure to check that a table is still compatible with a previously made query plan. (Tom Lane, 2007-02-02)

  Use of ALTER COLUMN TYPE creates a hazard for cached query plans: they could contain Vars that claim a column has a different type than it now has. Fix this by checking during plan startup that Vars at relation scan level match the current relation tuple descriptor. Since at that point we already have at least AccessShareLock, we can be sure the column type will not change underneath us later in the query. However, since a backend's locks do not conflict against itself, there is still a hole for an attacker to exploit: he could try to execute ALTER COLUMN TYPE while a query is in progress in the current backend. Seal that hole by rejecting ALTER TABLE whenever the target relation is already open in the current backend.

  This is a significant security hole: not only can one trivially crash the backend, but with appropriate misuse of pass-by-reference datatypes it is possible to read out arbitrary locations in the server process's memory, which could allow retrieving database content the user should not be able to see. Our thanks to Jeff Trout for the initial report.

  Security: CVE-2007-0556
* Update CVS HEAD for 2007 copyright. Back branches are typically not back-stamped for this. (Bruce Momjian, 2007-01-05)
* Fix failure due to accessing an already-freed tuple descriptor in a plan involving HashAggregate over SubqueryScan (this is the known case, there may well be more). (Tom Lane, 2006-12-26)

  The bug is only latent in releases before 8.2 since they didn't try to access tupletable slots' descriptors during ExecDropTupleTable. The least bogus fix seems to be to make subqueries share the parent query's memory context, so that tupdescs they create will have the same lifespan as those of the parent query. There are comments in the code envisioning going even further by not having a separate child EState at all, but that will require rethinking executor access to range tables, which I don't want to tackle right now. Per bug report from Jean-Pierre Pelletier.
* pgindent run for 8.2. (Bruce Momjian, 2006-10-04)