path: root/src/backend/executor
Commit log (most recent first): message, author, date
* Rearrange code in ExecInitBitmapHeapScan so that we don't initialize the
  child plan nodes until we have acquired lock on the relation to scan.
  The relative order of initialization of plan nodes isn't real important
  in other cases, but it's critical here because one is supposed to lock a
  relation before its indexes, not vice versa. The original coding was at
  least vulnerable to deadlock against DROP INDEX, and perhaps worse
  things. (Tom Lane, 2005-12-02)

* Tweak hash join code to use an additional heuristic for deciding whether
  it's worth probing the outer relation for emptiness before building the
  hash table. To wit, if we're rescanning a join previously performed,
  remember whether we found it nonempty the previous time, and don't
  bother with the probe if it was nonempty. This buys back the performance
  lost in examples like Mario Weilguni's. (Tom Lane, 2005-11-28)
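The rescan heuristic above amounts to caching one bit of knowledge between
executions of the join. A minimal standalone C sketch of that decision,
using hypothetical names (HashJoinRescanSketch, should_probe_outer) rather
than the real executor identifiers:

    #include <stdbool.h>

    /* Hypothetical state; the real flag lives in the hash join's plan state. */
    typedef struct HashJoinRescanSketch
    {
        bool rescanning;            /* is this a repeat execution of the join? */
        bool outer_was_nonempty;    /* what we learned on the previous run */
    } HashJoinRescanSketch;

    /* Probe the outer relation only when we have no evidence it is nonempty. */
    static bool
    should_probe_outer(const HashJoinRescanSketch *state)
    {
        if (state->rescanning && state->outer_was_nonempty)
            return false;           /* skip the probe; go build the hash table */
        return true;
    }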
* Recent changes to allow hash join to exit early given empty input from
  one child or the other had a problem: they did not leave the node in a
  state that ExecReScanHashJoin would understand. In particular it would
  tend to fail to reset the child plans when needed. Per report from Mario
  Weilguni. (Tom Lane, 2005-11-28)

* Get rid of ExecAssignResultTypeFromOuterPlan() and make all plan node
  types generate their output tuple descriptors from their target lists
  (ie, using ExecAssignResultTypeFromTL()). We long ago fixed things so
  that all node types have minimally valid tlists, so there's no longer
  any good reason to have two different ways of doing it. This change is
  needed to fix a bug reported by Hayden James: the fix of 2005-11-03 to
  emit the correct column names after optimizing away a SubqueryScan node
  didn't work if the new top-level plan node used
  ExecAssignResultTypeFromOuterPlan to generate its tupdesc, since the
  next plan node down won't have the correct column labels.
  (Tom Lane, 2005-11-23)

* Re-run pgindent, fixing a problem where comment lines after a blank
  comment line were output as too long, and update typedefs for the /lib
  directory. Also fix a case where identifiers were used as variable names
  in the backend, but as typedefs in ecpg (favor the backend for
  indenting). Backpatch to 8.1.X. (Bruce Momjian, 2005-11-22)
* Modify tuptoaster's API so that it does not try to modify the passed
  tuple in-place, but instead passes back an all-new tuple structure if
  any changes are needed. This is a much cleaner and more robust solution
  for the bug discovered by Alexey Beschiokov; accordingly, revert the
  quick hack I installed yesterday. With this change,
  HeapTupleData.t_datamcxt is no longer needed; will remove it in a
  separate commit in HEAD only. (Tom Lane, 2005-11-20)

* Stopgap solution for the problem reported by Alexey Beschiokov: after
  doing heap_insert or heap_update, wipe out any extracted fields in the
  TupleTableSlot containing the tuple, because they might not be valid
  anymore if tuptoaster.c changed the tuple. Safe because the slot must be
  in the materialized state, but mighty ugly --- find a better answer!
  (Tom Lane, 2005-11-19)

* Prevent ExecInsert() and ExecUpdate() from scribbling on the result
  tuple slot of the topmost plan node when a trigger returns a modified
  tuple. These appear to be the only places where a plan node's caller did
  not treat the result slot as read-only, which is an assumption that
  nodeUnique makes as of 8.1. Fixes the trigger-vs-DISTINCT bug reported
  by Frank van Vugt. (Tom Lane, 2005-11-14)

* Rename the members of the CommandDest enum so they don't collide with
  other uses of those names. (Debug and None were pretty bad names
  anyway.) I hope I caught all uses of the names in comments too.
  (Alvaro Herrera, 2005-11-03)
* Better solution to the problem of labeling whole-row Datums that are
  generated from subquery outputs: use the type info stored in the Var
  itself. To avoid making ExecEvalVar and slot_getattr more complex and
  slower, I split out the whole-row case into a separate ExecEval routine.
  (Tom Lane, 2005-10-19)

* Ensure that the Datum generated from a whole-row Var contains valid
  type ID information even when it's a record type. This is needed to
  handle whole-row Vars referencing subquery outputs. Per example from
  Richard Huxton. (Tom Lane, 2005-10-19)

* A few trivial code cleanups motivated by reading warnings generated by
  a recent HP C compiler. Mostly, get rid of useless local variables that
  are assigned to but never used. (Tom Lane, 2005-10-18)

* Standard pgindent run for 8.1. (Bruce Momjian, 2005-10-15)
* Revise pgstats stuff to fix the problems with not counting accesses
  generated by bitmap index scans. Along the way, simplify and speed up
  the code for counting sequential and index scans; it was both confusing
  and inefficient to be taking care of that in the per-tuple loops, IMHO.
  initdb forced because of internal changes in pg_stat view definitions.
  (Tom Lane, 2005-10-06)

* _SPI_execute_plan failed to return result tuple table to caller in the
  ProcessUtility case, resulting in an intratransaction memory leak if a
  utility command actually did return any tuples, as reported by Dmitry
  Karasik. Fix this and also make the behavior more consistent for cases
  involving nested SPI operations and multiple query trees, by ensuring
  that we store the state locally until it is ready to be returned to the
  caller. (Tom Lane, 2005-10-01)

* The original patch to avoid building a hash join's hashtable when the
  outer relation is empty did not work, per test case from Patrick Welche.
  It tried to use nodeHashjoin.c's high-level mechanisms for fetching an
  outer-relation tuple, but that code expected the hash table to be filled
  already. As patched, the code failed in corner cases such as having no
  outer-relation tuples for the first hash batch. Revert and rewrite.
  (Tom Lane, 2005-09-25)

* Remove some dead code. (Tom Lane, 2005-09-22)

* Tweak nodeBitmapAnd to stop evaluating sub-plan scans if it finds it's
  got an empty bitmap after any step; the remaining subplans can no longer
  affect the result. Per a suggestion from Ilia Kantor.
  (Tom Lane, 2005-08-28)
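A minimal sketch of that short-circuit, assuming the tidbitmap calls
(tbm_intersect, tbm_is_empty, tbm_free) and MultiExecProcNode as the way to
pull a whole bitmap from a child scan; this illustrates the idea and is not
the committed nodeBitmapAnd.c code:

    #include "postgres.h"
    #include "executor/executor.h"
    #include "nodes/tidbitmap.h"

    /* AND together the bitmaps of 'nplans' child scans, stopping early. */
    static TIDBitmap *
    bitmap_and_sketch(PlanState **subplans, int nplans)
    {
        TIDBitmap *result = NULL;
        int        i;

        for (i = 0; i < nplans; i++)
        {
            TIDBitmap *sub = (TIDBitmap *) MultiExecProcNode(subplans[i]);

            if (result == NULL)
                result = sub;           /* first input becomes the accumulator */
            else
            {
                tbm_intersect(result, sub);
                tbm_free(sub);
            }

            /* An empty intersection cannot grow back; skip remaining scans. */
            if (tbm_is_empty(result))
                break;
        }
        return result;
    }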
* Arrange for indexes and toast tables to inherit their ownership from
  the parent table, even if the command that creates them is executed by
  someone else (such as a superuser or a member of the owning role). Per
  gripe from Michael Fuhr. (Tom Lane, 2005-08-26)

* Repair problems with VACUUM destroying t_ctid chains too soon, and with
  insufficient paranoia in code that follows t_ctid links. (We must do
  both because even with VACUUM doing it properly, the intermediate state
  with a dangling t_ctid link is visible concurrently during lazy VACUUM,
  and could be seen afterwards if either type of VACUUM crashes partway
  through.) Also try to improve documentation about what's going on. Patch
  is a bit bulky because passing the XMAX information around required
  changing the APIs of some low-level heapam.c routines, but it's not
  conceptually very complicated. Per trouble report from Teodor and
  subsequent analysis. This needs to be back-patched, but I'll do that
  after 8.1 beta is out. (Tom Lane, 2005-08-20)

* Update some obsolete comments --- code is using t_self now, not t_ctid.
  (Tom Lane, 2005-08-18)

* Add NOWAIT option to SELECT FOR UPDATE/SHARE. Original patch by
  Hans-Juergen Schoenig, revisions by Karel Zak and Tom Lane.
  (Tom Lane, 2005-08-01)
* Replace pg_shadow and pg_group by new role-capable catalogs pg_authid
  and pg_auth_members. There are still many loose ends to finish in this
  patch (no documentation, no regression tests, no pg_dump support, for
  instance). But I'm going to commit it now anyway so that Alvaro can make
  some progress on shared dependencies. The catalog changes should be
  pretty much done. (Tom Lane, 2005-06-28)

* Add Oracle-compatible GREATEST and LEAST functions. Pavel Stehule.
  (Tom Lane, 2005-06-26)

* Avoid WAL-logging individual tuple insertions during CREATE TABLE AS
  (a/k/a SELECT INTO). Instead, flush and fsync the whole relation before
  committing. We do still need the WAL log when PITR is active, however.
  Simon Riggs and Tom Lane. (Tom Lane, 2005-06-20)

* Change the implementation of hash join to attempt to avoid unnecessary
  work if either of the join relations is empty. The logic is:
    (1) if the inner relation's startup cost is less than the outer
        relation's startup cost and this is not an outer join, read a
        single tuple from the inner relation via ExecHash() - if NULL,
        we're done;
    (2) read a single tuple from the outer relation - if NULL, we're done;
    (3) build the hash table on the inner relation - if the hash table is
        empty and this is not an outer join, we're done;
    (4) otherwise, do the hash join as usual.
  The implementation uses the new MultiExecProcNode API, per a suggestion
  from Tom: invoking ExecHash() now produces the first tuple from the Hash
  node's child node, whereas MultiExecHash() builds the hash table. I had
  to put in a bit of a kludge to get the row count returned for EXPLAIN
  ANALYZE to be correct: since ExecHash() is invoked to return a tuple,
  and then MultiExecHash() is invoked, we would return one too many tuples
  to EXPLAIN ANALYZE. I hacked around this by just manually detecting this
  situation and subtracting 1 from the EXPLAIN ANALYZE row count.
  (Neil Conway, 2005-06-15)
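A compact standalone sketch of steps (1)-(4) above, reduced to the decision
logic only; the type and callback names (JoinProbeSketch, inner_has_tuple,
and so on) are placeholders rather than the real nodeHashjoin.c identifiers,
and step (3) corresponds to MultiExecHash() in the real code:

    #include <stdbool.h>

    /* Placeholders for the pieces of executor state the decision needs. */
    typedef struct JoinProbeSketch
    {
        bool    is_outer_join;
        double  inner_startup_cost;
        double  outer_startup_cost;
        void   *ctx;
        bool  (*inner_has_tuple)(void *ctx);    /* pull one tuple from Hash child */
        bool  (*outer_has_tuple)(void *ctx);    /* pull one tuple from outer child */
        void  (*build_hash_table)(void *ctx);   /* MultiExecHash() in the real code */
        bool  (*hash_table_is_empty)(void *ctx);
    } JoinProbeSketch;

    /* Return false as soon as the join is provably empty, per steps (1)-(3). */
    static bool
    hash_join_worth_running(const JoinProbeSketch *j)
    {
        /* (1) probe the inner side first when it is the cheaper one to start */
        if (!j->is_outer_join &&
            j->inner_startup_cost < j->outer_startup_cost &&
            !j->inner_has_tuple(j->ctx))
            return false;

        /* (2) we need at least one outer tuple before paying for the table */
        if (!j->outer_has_tuple(j->ctx))
            return false;

        /* (3) build the hash table; an empty one ends a non-outer join */
        j->build_hash_table(j->ctx);
        if (!j->is_outer_join && j->hash_table_is_empty(j->ctx))
            return false;

        /* (4) otherwise, proceed with the usual hash join */
        return true;
    }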
* Make SPI set SPI_processed for CREATE TABLE AS / SELECT INTO commands;
  this in turn causes CREATE TABLE AS in plpgsql to set ROW_COUNT. This is
  how it behaved before 7.4; I had unintentionally changed the behavior in
  a bit of sloppy micro-optimization. (Tom Lane, 2005-06-09)

* Modify hash_search() API to prevent future occurrences of the error
  spotted by Qingqing Zhou. The HASH_ENTER action now automatically fails
  with elog(ERROR) on out-of-memory --- which incidentally lets us
  eliminate duplicate error checks in quite a bunch of places. If you
  really need the old return-NULL-on-out-of-memory behavior, you can ask
  for HASH_ENTER_NULL. But there is now an Assert in that path checking
  that you aren't hoping to get that behavior in a palloc-based hash
  table. Along the way, remove the old HASH_FIND_SAVE/HASH_REMOVE_SAVED
  actions, which were not being used anywhere anymore, and were surely too
  ugly and unsafe to want to see revived again. (Tom Lane, 2005-05-29)
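A short usage sketch of the revised behavior: hash_search() and the
HASH_ENTER / HASH_ENTER_NULL actions are the real utils/hsearch.h API, while
MyEntry, my_table, and bump_counter are hypothetical names used only for
illustration:

    #include "postgres.h"
    #include "utils/hsearch.h"

    /* Hypothetical entry type: the hash key must come first. */
    typedef struct MyEntry
    {
        Oid     key;
        int     counter;
    } MyEntry;

    static void
    bump_counter(HTAB *my_table, Oid key)
    {
        MyEntry *entry;
        bool     found;

        /* HASH_ENTER now elog(ERROR)s on out-of-memory, so no NULL check here */
        entry = (MyEntry *) hash_search(my_table, &key, HASH_ENTER, &found);
        if (!found)
            entry->counter = 0;         /* fresh entry: initialize the payload */
        entry->counter++;

        /*
         * Callers that really want the old return-NULL-on-out-of-memory
         * behavior can pass HASH_ENTER_NULL instead, but only for
         * shared-memory (non-palloc) hash tables.
         */
    }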
* Teach the planner to remove SubqueryScan nodes from the plan if they
  aren't doing anything useful (ie, neither selection nor projection).
  Also, extend to SubqueryScan the hacks already in place to avoid
  unnecessary ExecProject calls when the result would just be the same
  tuple the subquery already delivered. This saves some overhead in UNION
  and other set operations, as well as avoiding overhead for
  unflatten-able subqueries. Per example from Sokolov Yura.
  (Tom Lane, 2005-05-22)

* Fix latent bug in ExecSeqRestrPos: it leaves the plan node's result slot
  in an inconsistent state. (This is only latent because in reality
  ExecSeqRestrPos is dead code at the moment ... but someday maybe it
  won't be.) Add some comments about what the API for plan node
  mark/restore actually is, because it's not immediately obvious.
  (Tom Lane, 2005-05-15)

* Minor refactoring to eliminate duplicate code and make startup a tad
  faster. (Tom Lane, 2005-05-14)

* Revise nodeMergejoin in light of example provided by Guillaume Smet.
  When one side of the join has a NULL, we don't want to uselessly try to
  match it against every remaining tuple of the other side. While at it,
  rewrite the comparison machinery to avoid multiple evaluations of the
  left and right input expressions and to use a btree comparator where
  available, instead of double operator calls. Also revise the state
  machine to eliminate redundant comparisons and hopefully make it more
  readable too. (Tom Lane, 2005-05-13)
* Remove some unnecessary code: since ExecMakeFunctionResultNoSets does
  not want to handle set inputs, it should just pass NULL for isDone, not
  make its own failure check. (Tom Lane, 2005-05-12)

* Add some defenses against functions declared to return set that don't
  actually follow the protocol; per example from Kris Jurka.
  (Tom Lane, 2005-05-09)

* For some reason access/tupmacs.h has been #including utils/memutils.h,
  which is neither needed by nor related to that header. Remove the bogus
  inclusion and instead include the header in those C files that actually
  need it. Also fix unnecessary inclusions and bad inclusion order in
  tsearch2 files. (Tom Lane, 2005-05-06)

* Adjust nodeBitmapIndexscan to keep the target index opened from plan
  startup to end, rather than re-opening it in each
  MultiExecBitmapIndexScan call. I had foolishly thought that
  opening/closing wouldn't be much more expensive than a rescan call, but
  that was sheer brain fade. This seems to fix about half of the
  performance lossage reported by Sergey Koposov. I'm still not sure where
  the other half went. (Tom Lane, 2005-05-05)

* Change SPI functions to use a `long' when specifying the number of
  tuples to produce when running the executor. This is consistent with the
  internal executor APIs (such as ExecutorRun), which also use a long for
  this purpose. It also allows FETCH_ALL to be passed -- since FETCH_ALL
  is defined as LONG_MAX, this wouldn't have worked on platforms where int
  and long are of different sizes. Per report from Tzahi Fadida.
  (Neil Conway, 2005-05-02)
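A minimal sketch of an SPI call site after this change, using the public SPI
entry points (SPI_connect, SPI_execute, SPI_finish); the function name and
query text are illustrative only:

    #include "postgres.h"
    #include "executor/spi.h"

    static void
    run_query_sketch(void)
    {
        if (SPI_connect() != SPI_OK_CONNECT)
            elog(ERROR, "SPI_connect failed");

        /* The tcount argument is now a long; 0 means "fetch everything",
         * and FETCH_ALL (defined as LONG_MAX) can be passed through intact. */
        if (SPI_execute("SELECT 1", true, 0L) < 0)
            elog(ERROR, "SPI_execute failed");

        elog(LOG, "processed %lu rows", (unsigned long) SPI_processed);

        SPI_finish();
    }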
* Change CREATE TYPE to require datatype output and send functions to
  have only one argument. (Per recent discussion, the option to accept
  multiple arguments is pretty useless for user-defined types, and would
  be a likely source of security holes if it was used.) Simplify call
  sites of output/send functions to not bother passing more than one
  argument. (Tom Lane, 2005-05-01)

* Implement sharable row-level locks, and use them for foreign key
  references to eliminate unnecessary deadlocks. This commit adds
  SELECT ... FOR SHARE paralleling SELECT ... FOR UPDATE. The
  implementation uses a new SLRU data structure (managed much like
  pg_subtrans) to represent multiple-transaction-ID sets. When more than
  one transaction is holding a shared lock on a particular row, we create
  a MultiXactId representing that set of transactions and store its ID in
  the row's XMAX. This scheme allows an effectively unlimited number of
  row locks, just as we did before, while not costing any extra overhead
  except when a shared lock actually has to be shared. Still TODO: use the
  regular lock manager to control the grant order when multiple backends
  are waiting for a row lock. Alvaro Herrera and Tom Lane.
  (Tom Lane, 2005-04-28)
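Conceptually, a row's XMAX now holds either a single locker's transaction ID
or the ID of a stored set of transaction IDs. The sketch below illustrates
just that idea with made-up types and callbacks (RowLockSketch, create_set,
extend_set); it is not the real multixact.c or heapam.c API:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t XidSketch;         /* stand-in for TransactionId/MultiXactId */

    typedef struct RowLockSketch
    {
        XidSketch xmax;                 /* single locker, or an ID naming a set */
        bool      xmax_is_multi;        /* like the HEAP_XMAX_IS_MULTI infomask bit */
    } RowLockSketch;

    /* Record an additional shared locker on an already share-locked row. */
    static void
    add_shared_locker(RowLockSketch *row, XidSketch new_locker,
                      XidSketch (*create_set)(XidSketch a, XidSketch b),
                      XidSketch (*extend_set)(XidSketch set, XidSketch member))
    {
        if (!row->xmax_is_multi)
        {
            /* the lock becomes genuinely shared: build a two-member set */
            row->xmax = create_set(row->xmax, new_locker);
            row->xmax_is_multi = true;
        }
        else
        {
            /* already a set: derive a new set that includes the new member */
            row->xmax = extend_set(row->xmax, new_locker);
        }
    }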
* Remove support for OR'd indexscans internal to a single IndexScan plan
  node, as this behavior is now better done as a bitmap OR indexscan. This
  allows considerable simplification in nodeIndexscan.c itself as well as
  several planner modules concerned with indexscan plan generation. Also
  we can improve the sharing of code between regular and bitmap
  indexscans, since they are now working with nigh-identical Plan nodes.
  (Tom Lane, 2005-04-25)

* Adjust nodeBitmapIndexscan.c to not keep the index open across calls,
  but just to open and close it during MultiExecBitmapIndexScan. This
  avoids acquiring duplicate resources (eg, multiple locks on the same
  relation) in a tree with many bitmap scans. Also, don't bother to lock
  the parent heap at all here, since we must be underneath a
  BitmapHeapScan node that will be holding a suitable lock.
  (Tom Lane, 2005-04-24)
* Actually, nodeBitmapIndexscan.c doesn't need to create a standard
  ExprContext at all, since it never evaluates any qual or tlist
  expressions. (Tom Lane, 2005-04-24)

* Put back example of using Result node to execute an INSERT.
  (Tom Lane, 2005-04-24)

* Update some comments to use SQL examples rather than QUEL. From Simon
  Riggs. (Neil Conway, 2005-04-24)

* Remove explicit FreeExprContext calls during plan node shutdown. The
  ExprContexts will be freed anyway when FreeExecutorState() is reached,
  and letting that routine do the work is more efficient because it will
  automatically free the ExprContexts in reverse creation order. The
  existing coding was effectively freeing them in exactly the worst
  possible order, resulting in O(N^2) behavior inside list_delete_ptr,
  which becomes highly visible in cases with a few thousand plan nodes.
  ExecFreeExprContext is now effectively a no-op and could be removed, but
  I left it in place in case we ever want to put it back to use.
  (Tom Lane, 2005-04-23)
* First cut at planner support for bitmap index scans. Lots to do yet,
  but the code is basically working. Along the way, rewrite the entire
  approach to processing OR index conditions, and make it work in join
  cases for the first time ever. orindxpath.c is now basically obsolete,
  but I left it in for the time being to allow easy comparison testing
  against the old implementation. (Tom Lane, 2005-04-22)

* Minor performance improvement: avoid unnecessary creation/unioning of
  bitmaps for multiple indexscans. Instead just let each indexscan add
  TIDs directly into the BitmapOr node's result bitmap.
  (Tom Lane, 2005-04-20)

* Create executor and planner-backend support for decoupled heap and
  index scans, using in-memory tuple ID bitmaps as the intermediary. The
  planner frontend (path creation and cost estimation) is not there yet,
  so none of this code can be executed. I have tested it using some hacked
  planner code that is far too ugly to see the light of day, however.
  Committing now so that the bulk of the infrastructure changes go in
  before the tree drifts under me. (Tom Lane, 2005-04-19)

* Create a new 'MultiExecProcNode' call API for plan nodes that don't
  return just a single tuple at a time. Currently the only such node type
  is Hash, but I expect we will soon have indexscans that can return tuple
  bitmaps. A side benefit is that EXPLAIN ANALYZE now shows the correct
  tuple count for a Hash node. (Tom Lane, 2005-04-16)
* Put back blessing of record-function tupledesc, which I removed in a
  fit of over-optimization. (Tom Lane, 2005-04-14)