path: root/src/backend/executor
Commit message  Author  Age
...
* Improve sorting speed by pre-extracting the first sort-key column of  Tom Lane  2006-02-26
  each tuple, as per my proposal of several days ago. Also, clean up sort memory management by keeping all working data in a separate memory context, and refine the handling of low-memory conditions.
* Cleanup the usage of ScanDirection: use the symbolic names for the  Neil Conway  2006-02-21
  possible ScanDirection alternatives rather than magic numbers (-1, 0, 1). Also, use the ScanDirection macros in a few places rather than directly checking whether `dir == ForwardScanDirection' and the like. Per patch from James William Pye. His patch also changed ScanDirection to be a "char" rather than an enum, which I haven't applied.
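
  For reference, the ScanDirection values and test macros in question look roughly like this (a standalone sketch of the idiom, not a verbatim copy of the header; the authoritative definitions live in src/include/access/sdir.h):

      #include <stdbool.h>
      #include <stdio.h>

      /* Sketch of the ScanDirection enum: symbolic names instead of -1/0/1. */
      typedef enum ScanDirection
      {
          BackwardScanDirection = -1,
          NoMovementScanDirection = 0,
          ForwardScanDirection = 1
      } ScanDirection;

      /* Test macros, preferred over writing `dir == ForwardScanDirection' inline. */
      #define ScanDirectionIsForward(dir)  ((bool) ((dir) == ForwardScanDirection))
      #define ScanDirectionIsBackward(dir) ((bool) ((dir) == BackwardScanDirection))

      static const char *
      describe(ScanDirection dir)
      {
          if (ScanDirectionIsForward(dir))
              return "forward";
          if (ScanDirectionIsBackward(dir))
              return "backward";
          return "no movement";
      }

      int
      main(void)
      {
          printf("%s\n", describe(ForwardScanDirection));   /* prints "forward" */
          printf("%s\n", describe(BackwardScanDirection));  /* prints "backward" */
          return 0;
      }
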
* Add TABLESPACE and ON COMMIT clauses to CREATE TABLE AS. ON COMMIT is  Neil Conway  2006-02-19
  required by the SQL standard, and TABLESPACE is useful functionality. Patch from Kris Jurka, minor editorialization by Neil Conway.
* Improve my initial, rather hacky implementation of joins to append  Tom Lane  2006-02-05
  relations: fix the executor so that we can have an Append plan on the inside of a nestloop and still pass down outer index keys to index scans within the Append, then generate such plans as if they were regular inner indexscans. This avoids the need to evaluate the outer relation multiple times.
* Allow row comparisons to be used as indexscan qualifications.  Tom Lane  2006-01-25
  This completes the project to upgrade our handling of row comparisons.
* Add a new system view, pg_cursors, that displays the currently available  Neil Conway  2006-01-18
  cursors. Patch from Joachim Wieland, review and editorialization by Neil Conway.

  The view lists cursors defined by DECLARE CURSOR, using SPI, or via the Bind message of the frontend/backend protocol. This means the view does not list the unnamed portal or the portal created to implement EXECUTE. Because we do list SPI portals, there might be more rows in this view than you might expect if you are using SPI implicitly (e.g. via a procedural language).

  Per recent discussion on -hackers, the query string included in the view for cursors defined by DECLARE CURSOR is based on debug_query_string. That means it is not accurate if multiple queries separated by semicolons are submitted as one query string. However, there doesn't seem to be a trivial fix for that: debug_query_string is better than nothing. I also changed SPI_cursor_open() to include the source text for the portal it creates: AFAICS there is no reason not to do this.

  Update the documentation and regression tests, bump the catversion.
* Some minor code cleanup, falling out from the removal of rtree. SK_NEGATE  Tom Lane  2006-01-14
  isn't being used anywhere anymore, and there seems no point in a generic index_keytest() routine when two out of three remaining access methods aren't using it. Also, add a comment documenting a convention for letting access methods define private flag bits in ScanKey sk_flags. There are no such flags at the moment but I'm thinking about changing btree's handling of "required keys" to use flag bits in the keys rather than a count of required key positions. Also, if some AM did still want SK_NEGATE then it would be reasonable to treat it as a private flag bit.
* Repair "Halloween problem" in EvalPlanQual: a tuple that's been inserted by  Tom Lane  2006-01-12
  our own command (or more generally, xmin = our xact and cmin >= current command ID) should not be seen as good. Else we may try to update rows we already updated. This error was inserted last August while fixing the even bigger problem that the old coding wouldn't see *any* tuples inserted by our own transaction as good. Per report from Euler Taveira de Oliveira.
* Cosmetic code cleanup: fix a bunch of places that used "return (expr);"  Neil Conway  2006-01-11
  rather than "return expr;" -- the latter style is used in most of the tree. I kept the parentheses when they were necessary or useful because the return expression was complex.
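
  The style change amounts to no more than this (a trivial before/after sketch):

      /* Old style: redundant parentheses around the return value. */
      static int
      one_more_old(int x)
      {
          return (x + 1);
      }

      /* Style used in most of the tree; parentheses stay only where the
       * return expression is complex enough that they actually help. */
      static int
      one_more_new(int x)
      {
          return x + 1;
      }

      int
      main(void)
      {
          return one_more_old(1) == one_more_new(1) ? 0 : 1;
      }
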
* Add comment explaining why RelationOpenSmgr() call is not needed.  Tom Lane  2006-01-07
* Implement SQL-compliant treatment of row comparisons for < <= > >= cases  Tom Lane  2005-12-28
  (previously we only did = and <> correctly). Also, allow row comparisons with any operators that are in btree opclasses, not only those with these specific names. This gets rid of a whole lot of indefensible assumptions about the behavior of particular operators based on their names ... though it's still true that IN and NOT IN expand to "= ANY". The patch adds a RowCompareExpr expression node type, and makes some changes in the representation of ANY/ALL/ROWCOMPARE SubLinks so that they can share code with RowCompareExpr. I have not yet done anything about making RowCompareExpr an indexable operator, but will look at that soon. initdb forced due to changes in stored rules.
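
  For context, the SQL-spec ("SQL-compliant") interpretation of a row comparison such as (a, b) < (c, d) is lexicographic: true when a < c, or when a = c and b < d. A standalone sketch of that rule for a two-column row (hypothetical helper, not backend code; NULL handling omitted):

      #include <stdbool.h>
      #include <stdio.h>

      /* Lexicographic "(a, b) < (c, d)" as the SQL standard defines row
       * comparison; the first unequal column decides the result. */
      static bool
      row_less_than(int a, int b, int c, int d)
      {
          if (a != c)
              return a < c;
          return b < d;
      }

      int
      main(void)
      {
          /* (1, 9) < (2, 0) is true: the first column already decides it. */
          printf("%d\n", row_less_than(1, 9, 2, 0));
          /* (2, 1) < (2, 0) is false: first columns tie, second decides. */
          printf("%d\n", row_less_than(2, 1, 2, 0));
          return 0;
      }
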
* Fix problem with whole-row Vars referencing sub-select outputs, per  Tom Lane  2005-12-14
  example from Jim Dew. Add some simple regression tests, since this is an area we seem to break regularly :-(
* Fix a couple of lingering references to POSTQUEL query syntax, per Simon.  Tom Lane  2005-12-07
* Tweak indexscan machinery to avoid taking an AccessShareLock on an index  Tom Lane  2005-12-03
  if we already have a stronger lock due to the index's table being the update target table of the query. Same optimization I applied earlier at the table level. There doesn't seem to be much interest in the more radical idea of not locking indexes at all, so do what we can ...
* Adjust scan plan nodes to avoid getting an extra AccessShareLock on a  Tom Lane  2005-12-02
  relation if it's already been locked by execMain.c as either a result relation or a FOR UPDATE/SHARE relation. This avoids an extra trip to the shared lock manager state. Per my suggestion yesterday.
* Rearrange code in ExecInitBitmapHeapScan so that we don't initialize the  Tom Lane  2005-12-02
  child plan nodes until we have acquired lock on the relation to scan. The relative order of initialization of plan nodes isn't really important in other cases, but it's critical here because one is supposed to lock a relation before its indexes, not vice versa. The original coding was at least vulnerable to deadlock against DROP INDEX, and perhaps worse things.
* Tweak hash join code to use an additional heuristic for deciding whether  Tom Lane  2005-11-28
  it's worth probing the outer relation for emptiness before building the hash table. To wit, if we're rescanning a join previously performed, remember whether we found it nonempty the previous time, and don't bother with the probe if it was nonempty. This buys back the performance lost in examples like Mario Weilguni's.
* Recent changes to allow hash join to exit early given empty input from  Tom Lane  2005-11-28
  one child or the other had a problem: they did not leave the node in a state that ExecReScanHashJoin would understand. In particular it would tend to fail to reset the child plans when needed. Per report from Mario Weilguni.
* Teach tid-scan code to make use of "ctid = ANY (array)" clauses, so that  Tom Lane  2005-11-26
  "ctid IN (list)" will still work after we convert IN to ScalarArrayOpExpr. Make some minor efficiency improvements while at it, such as ensuring that multiple TIDs are fetched in physical heap order. And fix EXPLAIN so that it shows what's really going on for a TID scan.
* Change seqscan logic so that we check visibility of all tuples on a page  Tom Lane  2005-11-26
  when we first read the page, rather than checking them one at a time. This allows us to take and release the buffer content lock just once per page, instead of once per tuple. Since it's a shared lock the contention penalty for holding the lock longer shouldn't be too bad. We can safely do this only when using an MVCC snapshot; else the assumption that visibility won't change over time is uncool. Therefore there are now two code paths depending on the snapshot type. I also made the same change in nodeBitmapHeapscan.c, where it can be done always because we only support MVCC snapshots for bitmap scans anyway. Also make some incidental cleanups in the APIs of these functions. Per a suggestion from Qingqing Zhou.
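
  The restructuring can be pictured with a heavily simplified standalone sketch (lock_page_shared and tuple_is_visible are hypothetical stand-ins for the buffer content lock and the MVCC visibility test; the real code lives in the heap access method and nodeBitmapHeapscan.c):

      #include <stdbool.h>

      #define MAX_TUPLES_PER_PAGE 256

      /* Hypothetical stand-ins for the buffer content lock and snapshot test. */
      static void lock_page_shared(int page)          { (void) page; }
      static void unlock_page(int page)               { (void) page; }
      static bool tuple_is_visible(int page, int off) { (void) page; return (off % 2) == 0; }

      /*
       * New scheme: when a page is first read, take the shared content lock
       * once, test the visibility of every tuple on it, remember the visible
       * offsets, and release the lock.  Later fetches just walk the saved
       * array instead of doing a lock/test/unlock cycle per tuple.  (Only
       * safe with an MVCC snapshot, since the saved answers must not change.)
       */
      static int
      collect_visible_offsets(int page, int ntuples, int visible[MAX_TUPLES_PER_PAGE])
      {
          int nvis = 0;

          lock_page_shared(page);
          for (int off = 0; off < ntuples; off++)
          {
              if (tuple_is_visible(page, off))
                  visible[nvis++] = off;
          }
          unlock_page(page);

          return nvis;
      }

      int
      main(void)
      {
          int visible[MAX_TUPLES_PER_PAGE];

          return collect_visible_offsets(0, 10, visible) == 5 ? 0 : 1;
      }
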
* Teach planner and executor to handle ScalarArrayOpExpr as an indexable  Tom Lane  2005-11-25
  qualification when the underlying operator is indexable and useOr is true. That is, indexkey op ANY (ARRAY[...]) is effectively translated into an OR combination of one indexscan for each array element. This only works for bitmap index scans, of course, since regular indexscans no longer support OR'ing of scans. There are still some loose ends to clean up before changing 'x IN (list)' to translate as a ScalarArrayOpExpr; for instance predtest.c ought to be taught about it. But this gets the basic functionality in place.
* Improve ExecStoreTuple to be smarter about replacing the contents of  Tom Lane  2005-11-25
  a TupleTableSlot: instead of calling ExecClearTuple, inline the needed operations, so that we can avoid redundant steps. In particular, when the old and new tuples are both on the same disk page, avoid releasing and re-acquiring the buffer pin --- this saves work in both the bufmgr and ResourceOwner modules. To make this improvement actually useful, partially revert a change I made on 2004-04-21 that caused SeqNext et al to call ExecClearTuple before ExecStoreTuple. The motivation for that, to avoid grabbing the BufMgrLock separately for releasing the old buffer and grabbing the new one, no longer applies. My profiling says that this saves about 5% of the CPU time for an all-in-memory seqscan.
* Get rid of ExecAssignResultTypeFromOuterPlan() and make all plan node types  Tom Lane  2005-11-23
  generate their output tuple descriptors from their target lists (ie, using ExecAssignResultTypeFromTL()). We long ago fixed things so that all node types have minimally valid tlists, so there's no longer any good reason to have two different ways of doing it. This change is needed to fix bug reported by Hayden James: the fix of 2005-11-03 to emit the correct column names after optimizing away a SubqueryScan node didn't work if the new top-level plan node used ExecAssignResultTypeFromOuterPlan to generate its tupdesc, since the next plan node down won't have the correct column labels.
* Re-run pgindent, fixing a problem where comment lines after a blank  Bruce Momjian  2005-11-22
  comment line were output as too long, and update typedefs for /lib directory. Also fix case where identifiers were used as variable names in the backend, but as typedefs in ecpg (favor the backend for indenting). Backpatch to 8.1.X.
* Remove the t_datamcxt field of HeapTupleData. This was introduced for  Tom Lane  2005-11-20
  the convenience of tuptoaster.c and is no longer needed, so may as well get rid of some small amount of overhead.
* Modify tuptoaster's API so that it does not try to modify the passed  Tom Lane  2005-11-20
  tuple in-place, but instead passes back an all-new tuple structure if any changes are needed. This is a much cleaner and more robust solution for the bug discovered by Alexey Beschiokov; accordingly, revert the quick hack I installed yesterday. With this change, HeapTupleData.t_datamcxt is no longer needed; will remove it in a separate commit in HEAD only.
* Stopgap solution for problem reported by Alexey Beschiokov: after  Tom Lane  2005-11-19
  doing heap_insert or heap_update, wipe out any extracted fields in the TupleTableSlot containing the tuple, because they might not be valid anymore if tuptoaster.c changed the tuple. Safe because slot must be in the materialized state, but mighty ugly --- find a better answer!
* Update obsolete comment describing ExecDelete(), per Simon Riggs.  Neil Conway  2005-11-18
* Make SQL arrays support null elements. This commit fixes the core array  Tom Lane  2005-11-17
  functionality, but I still need to make another pass looking at places that incidentally use arrays (such as ACL manipulation) to make sure they are null-safe. Contrib needs work too. I have not changed the behaviors that are still under discussion about array comparison and what to do with lower bounds.
* Prevent ExecInsert() and ExecUpdate() from scribbling on the result tuple  Tom Lane  2005-11-14
  slot of the topmost plan node when a trigger returns a modified tuple. These appear to be the only places where a plan node's caller did not treat the result slot as read-only, which is an assumption that nodeUnique makes as of 8.1. Fixes trigger-vs-DISTINCT bug reported by Frank van Vugt.
* Rename the members of CommandDest enum so they don't collide with other uses of  Alvaro Herrera  2005-11-03
  those names. (Debug and None were pretty bad names anyway.) I hope I caught all uses of the names in comments too.
* Better solution to the problem of labeling whole-row Datums that are  Tom Lane  2005-10-19
  generated from subquery outputs: use the type info stored in the Var itself. To avoid making ExecEvalVar and slot_getattr more complex and slower, I split out the whole-row case into a separate ExecEval routine.
* Ensure that the Datum generated from a whole-row Var contains valid  Tom Lane  2005-10-19
  type ID information even when it's a record type. This is needed to handle whole-row Vars referencing subquery outputs. Per example from Richard Huxton.
* A few trivial code cleanups motivated by reading warnings generated  Tom Lane  2005-10-18
  by a recent HP C compiler. Mostly, get rid of useless local variables that are assigned to but never used.
* Standard pgindent run for 8.1.  Bruce Momjian  2005-10-15
* Revise pgstats stuff to fix the problems with not counting accesses  Tom Lane  2005-10-06
  generated by bitmap index scans. Along the way, simplify and speed up the code for counting sequential and index scans; it was both confusing and inefficient to be taking care of that in the per-tuple loops, IMHO. initdb forced because of internal changes in pg_stat view definitions.
* _SPI_execute_plan failed to return result tuple table to caller in  Tom Lane  2005-10-01
  the ProcessUtility case, resulting in an intratransaction memory leak if a utility command actually did return any tuples, as reported by Dmitry Karasik. Fix this and also make the behavior more consistent for cases involving nested SPI operations and multiple query trees, by ensuring that we store the state locally until it is ready to be returned to the caller.
* The original patch to avoid building a hash join's hashtable when the  Tom Lane  2005-09-25
  outer relation is empty did not work, per test case from Patrick Welche. It tried to use nodeHashjoin.c's high-level mechanisms for fetching an outer-relation tuple, but that code expected the hash table to be filled already. As patched, the code failed in corner cases such as having no outer-relation tuples for the first hash batch. Revert and rewrite.
* Remove some dead code.  Tom Lane  2005-09-22
* Tweak nodeBitmapAnd to stop evaluating sub-plan scans if it finds it's  Tom Lane  2005-08-28
  got an empty bitmap after any step; the remaining subplans can no longer affect the result. Per a suggestion from Ilia Kantor.
* Arrange for indexes and toast tables to inherit their ownership from  Tom Lane  2005-08-26
  the parent table, even if the command that creates them is executed by someone else (such as a superuser or a member of the owning role). Per gripe from Michael Fuhr.
* Repair problems with VACUUM destroying t_ctid chains too soon, and with  Tom Lane  2005-08-20
  insufficient paranoia in code that follows t_ctid links. (We must do both because even with VACUUM doing it properly, the intermediate state with a dangling t_ctid link is visible concurrently during lazy VACUUM, and could be seen afterwards if either type of VACUUM crashes partway through.) Also try to improve documentation about what's going on. Patch is a bit bulky because passing the XMAX information around required changing the APIs of some low-level heapam.c routines, but it's not conceptually very complicated. Per trouble report from Teodor and subsequent analysis. This needs to be back-patched, but I'll do that after 8.1 beta is out.
* Update some obsolete comments --- code is using t_self now, not t_ctid.  Tom Lane  2005-08-18
* Add NOWAIT option to SELECT FOR UPDATE/SHARE.  Tom Lane  2005-08-01
  Original patch by Hans-Juergen Schoenig, revisions by Karel Zak and Tom Lane.
* Replace pg_shadow and pg_group by new role-capable catalogs pg_authid  Tom Lane  2005-06-28
  and pg_auth_members. There are still many loose ends to finish in this patch (no documentation, no regression tests, no pg_dump support for instance). But I'm going to commit it now anyway so that Alvaro can make some progress on shared dependencies. The catalog changes should be pretty much done.
* Add Oracle-compatible GREATEST and LEAST functions. Pavel Stehule  Tom Lane  2005-06-26
* Avoid WAL-logging individual tuple insertions during CREATE TABLE AS  Tom Lane  2005-06-20
  (a/k/a SELECT INTO). Instead, flush and fsync the whole relation before committing. We do still need the WAL log when PITR is active, however. Simon Riggs and Tom Lane.
* Change the implementation of hash join to attempt to avoid unnecessary  Neil Conway  2005-06-15
  work if either of the join relations is empty. The logic is:

  (1) if the inner relation's startup cost is less than the outer relation's startup cost and this is not an outer join, read a single tuple from the inner relation via ExecHash() - if NULL, we're done
  (2) read a single tuple from the outer relation - if NULL, we're done
  (3) build the hash table on the inner relation - if the hash table is empty and this is not an outer join, we're done
  (4) otherwise, do hash join as usual

  The implementation uses the new MultiExecProcNode API, per a suggestion from Tom: invoking ExecHash() now produces the first tuple from the Hash node's child node, whereas MultiExecHash() builds the hash table. I had to put in a bit of a kludge to get the row count returned for EXPLAIN ANALYZE to be correct: since ExecHash() is invoked to return a tuple, and then MultiExecHash() is invoked, we would return one too many tuples to EXPLAIN ANALYZE. I hacked around this by just manually detecting this situation and subtracting 1 from the EXPLAIN ANALYZE row count.
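
  The decision order spelled out above, sketched as straight-line control flow (a standalone sketch with hypothetical stub functions, not the actual nodeHashjoin.c code; note the 2005-09-25 entry earlier in this log, which later reworks this logic):

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct Tuple Tuple;   /* opaque stand-ins for executor objects */
      typedef struct Hash  Hash;

      /* Hypothetical stubs standing in for the real executor calls. */
      static double inner_startup_cost(void)       { return 1.0; }
      static double outer_startup_cost(void)       { return 10.0; }
      static Tuple *fetch_first_inner_tuple(void)  { return NULL; }
      static Tuple *fetch_first_outer_tuple(void)  { return NULL; }
      static Hash  *build_hash_table(void)         { return NULL; }
      static bool   hash_table_is_empty(Hash *h)   { return h == NULL; }
      static void   do_hash_join(Hash *h, Tuple *first_outer) { (void) h; (void) first_outer; }

      static void
      hash_join_sketch(bool is_outer_join)
      {
          /* (1) If the inner side is cheaper to start and this is not an outer
           *     join, peek at one inner tuple; an empty inner means no output. */
          if (!is_outer_join && inner_startup_cost() < outer_startup_cost())
              if (fetch_first_inner_tuple() == NULL)
                  return;

          /* (2) Peek at one outer tuple; an empty outer relation means we can
           *     skip building the hash table entirely. */
          Tuple *first_outer = fetch_first_outer_tuple();
          if (first_outer == NULL)
              return;

          /* (3) Build the hash table on the inner relation; if it comes up
           *     empty and this is not an outer join, we are done. */
          Hash *table = build_hash_table();
          if (!is_outer_join && hash_table_is_empty(table))
              return;

          /* (4) Otherwise proceed with the usual hash join. */
          do_hash_join(table, first_outer);
      }

      int
      main(void)
      {
          hash_join_sketch(false);
          return 0;
      }
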
* Make SPI set SPI_processed for CREATE TABLE AS / SELECT INTO commands;  Tom Lane  2005-06-09
  this in turn causes CREATE TABLE AS in plpgsql to set ROW_COUNT. This is how it behaved before 7.4; I had unintentionally changed the behavior in a bit of sloppy micro-optimization.
* Modify hash_search() API to prevent future occurrences of the error  Tom Lane  2005-05-29
  spotted by Qingqing Zhou. The HASH_ENTER action now automatically fails with elog(ERROR) on out-of-memory --- which incidentally lets us eliminate duplicate error checks in quite a bunch of places. If you really need the old return-NULL-on-out-of-memory behavior, you can ask for HASH_ENTER_NULL. But there is now an Assert in that path checking that you aren't hoping to get that behavior in a palloc-based hash table. Along the way, remove the old HASH_FIND_SAVE/HASH_REMOVE_SAVED actions, which were not being used anywhere anymore, and were surely too ugly and unsafe to want to see revived again.
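
  Caller impact, sketched as a backend-style fragment (this assumes the backend build environment and an already-created dynahash table; MyHashKey, MyHashEntry and lookup_or_create are hypothetical names invented for illustration):

      #include "postgres.h"
      #include "utils/hsearch.h"

      /* Hypothetical key/entry layout for some already-created HTAB. */
      typedef struct MyHashKey   { Oid relid; } MyHashKey;
      typedef struct MyHashEntry { MyHashKey key; int counter; } MyHashEntry;

      static MyHashEntry *
      lookup_or_create(HTAB *my_htab, Oid relid)
      {
          MyHashKey    key = { relid };
          bool         found;
          MyHashEntry *entry;

          /*
           * With HASH_ENTER, out-of-memory now raises elog(ERROR) inside
           * hash_search(), so the caller no longer needs a NULL check here.
           */
          entry = (MyHashEntry *) hash_search(my_htab, &key, HASH_ENTER, &found);

          if (!found)
              entry->counter = 0;     /* initialize a newly created entry */
          entry->counter++;

          /*
           * Callers that really want the old return-NULL-on-out-of-memory
           * behavior can ask for HASH_ENTER_NULL instead, which is only
           * sensible for hash tables that are not palloc-based.
           */
          return entry;
      }
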