path: root/src/backend/executor/nodeBitmapHeapscan.c
Commit message    Author    Age
...
* Update copyrights in source tree to 2008.    Bruce Momjian    2008-01-01
* pgindent run for 8.3.    Bruce Momjian    2007-11-15
* HOT updates. When we update a tuple without changing any of its indexed    Tom Lane    2007-09-20
  columns, and the new version can be stored on the same heap page, we no longer generate extra index entries for the new version. Instead, index searches follow the HOT-chain links to ensure they find the correct tuple version. In addition, this patch introduces the ability to "prune" dead tuples on a per-page basis, without having to do a complete VACUUM pass to recover space. VACUUM is still needed to clean up dead index entries, however. Pavan Deolasee, with help from a bunch of other people.
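  The chain-following this commit describes can be pictured with a deliberately simplified sketch. It is not the commit's code: the types and field names below are toy stand-ins that roughly mirror t_ctid and the HOT-updated flag in the real heapam machinery.

      #include <stdbool.h>
      #include <stdint.h>

      /* Toy stand-in for a heap tuple header: only the fields needed to show
       * how an index lookup walks a HOT chain within one page. */
      typedef struct ToyTuple
      {
          uint32_t xmin;          /* inserting transaction id */
          uint32_t xmax;          /* deleting/updating transaction id, 0 if none */
          uint16_t next_off;      /* t_ctid analogue: offset of next version, 0 = end */
          bool     hot_updated;   /* was this version HOT-updated in place? */
      } ToyTuple;

      /* Starting from the line pointer the index entry points at, follow the
       * HOT chain until we find the version visible to snapshot_xid, or run
       * off the end of the chain.  Returns the visible offset, or 0. */
      static uint16_t
      follow_hot_chain(const ToyTuple *page, uint16_t root_off, uint32_t snapshot_xid)
      {
          uint16_t off = root_off;

          while (off != 0)
          {
              const ToyTuple *tup = &page[off];

              /* crude visibility test: inserted before our snapshot and not
               * yet deleted as far as our snapshot is concerned */
              if (tup->xmin <= snapshot_xid &&
                  (tup->xmax == 0 || tup->xmax > snapshot_xid))
                  return off;

              /* only HOT-updated members link to a newer version on this page */
              if (!tup->hot_updated)
                  break;
              off = tup->next_off;
          }
          return 0;
      }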
* Redefine the lp_flags field of item pointers as having four states, rather    Tom Lane    2007-09-12
  than two independent bits (one of which was never used in heap pages anyway, or at least hadn't been in a very long time). This gives us flexibility to add the HOT notions of redirected and dead item pointers without requiring anything so klugy as magic values of lp_off and lp_len. The state values are chosen so that for the states currently in use (pre-HOT) there is no change in the physical representation.
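  For reference, this is roughly the layout the commit moves toward; the field widths and the four state names below follow what later appears in src/include/storage/itemid.h, but treat them as illustrative rather than a copy of the header.

      /* A line pointer packs offset, state, and length into 32 bits;
       * lp_flags holds one of four states rather than two independent bits. */
      typedef struct ToyItemIdData
      {
          unsigned    lp_off:15,      /* offset to tuple (from start of page) */
                      lp_flags:2,     /* state of line pointer, see below */
                      lp_len:15;      /* byte length of tuple */
      } ToyItemIdData;

      #define LP_UNUSED       0       /* unused (lp_len should be 0) */
      #define LP_NORMAL       1       /* used (lp_len should be > 0) */
      #define LP_REDIRECT     2       /* HOT redirect (lp_len should be 0) */
      #define LP_DEAD         3       /* dead, may or may not have storage */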
* Teach heapam code to know the difference between a real seqscan and the    Tom Lane    2007-06-09
  pseudo HeapScanDesc created for a bitmap heap scan. This avoids some useless overhead during a bitmap scan startup, in particular invoking the syncscan code. (We might someday want to do that, but right now it's merely useless contention for shared memory, to say nothing of possibly pushing useful entries out of syncscan's small LRU list.) This also allows elimination of ugly pgstat_discount_heap_scan() kluge.
* Fix up pgstats counting of live and dead tuples to recognize that committed    Tom Lane    2007-05-27
  and aborted transactions have different effects; also teach it not to assume that prepared transactions are always committed. Along the way, simplify the pgstats API by tying counting directly to Relations; I cannot detect any redeeming social value in having stats pointers in HeapScanDesc and IndexScanDesc structures. And fix a few corner cases in which counts might be missed because the relation's pgstat_info pointer hadn't been set.
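  The distinction the commit draws between committed and aborted transactions amounts to a small piece of accounting. The sketch below is a hypothetical restatement of that rule with toy counters, not the actual pgstat code or data structures.

      #include <stdbool.h>

      /* Toy per-table counters; pgstat keeps richer state than this. */
      typedef struct ToyTableCounts
      {
          long live_tuples;
          long dead_tuples;
      } ToyTableCounts;

      /* Fold one transaction's activity on a table into the counters.  A
       * commit and an abort have different effects: an aborted transaction's
       * own insertions become dead tuples immediately, and its deletions
       * never take effect. */
      static void
      apply_transaction(ToyTableCounts *c, long tuples_inserted,
                        long tuples_deleted, bool committed)
      {
          if (committed)
          {
              c->live_tuples += tuples_inserted - tuples_deleted;
              c->dead_tuples += tuples_deleted;   /* old versions await VACUUM */
          }
          else
          {
              c->dead_tuples += tuples_inserted;  /* aborted inserts are dead */
              /* aborted deletions leave the live count untouched */
          }
      }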
* Update CVS HEAD for 2007 copyright. Back branches are typically not    Bruce Momjian    2007-01-05
  back-stamped for this.
* Repair bug #2839: the various ExecReScan functions need to reset    Tom Lane    2006-12-26
  ps_TupFromTlist in plan nodes that make use of it. This was being done correctly in join nodes and Result nodes but not in any relation-scan nodes. Bug would lead to bogus results if a set-returning function appeared in the targetlist of a subquery that could be rescanned after partial execution, for example a subquery within EXISTS(). Bug has been around forever :-( ... surprising it wasn't reported before.
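  The fix itself is small; a minimal sketch of what each relation-scan node's ReScan routine now has to do, using toy structs in place of the real PlanState fields from nodes/execnodes.h.

      #include <stdbool.h>

      typedef struct ToyPlanState
      {
          bool ps_TupFromTlist;   /* mid-way through a set-returning tlist entry? */
      } ToyPlanState;

      typedef struct ToyScanState
      {
          ToyPlanState ps;
      } ToyScanState;

      /* On rescan, forget any partially emitted set-returning-function output
       * before restarting the scan, just as join and Result nodes already did. */
      static void
      toy_scan_rescan(ToyScanState *node)
      {
          node->ps.ps_TupFromTlist = false;
          /* ... then reset the underlying scan itself ... */
      }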
* pgindent run for 8.2.    Bruce Momjian    2006-10-04
* Remove 576 references of include files that were not needed.    Bruce Momjian    2006-07-14
* Fix problems with cached tuple descriptors disappearing while still in use    Tom Lane    2006-06-16
  by creating a reference-count mechanism, similar to what we did a long time ago for catcache entries. The back branches have an ugly solution involving lots of extra copies, but this way is more efficient. Reference counting is only applied to tupdescs that are actually in caches --- there seems no need to use it for tupdescs that are generated in the executor, since they'll go away during plan shutdown by virtue of being in the per-query memory context. Neil Conway and Tom Lane
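  The reference-count idea can be sketched with toy types: a descriptor that lives in a cache carries a pin count, while a descriptor built on the fly opts out and relies on memory-context cleanup. This is an illustration of the scheme described above, not the real TupleDesc code.

      /* Toy cached descriptor.  A refcount of -1 means "not reference-counted";
       * executor-built descriptors use that and are freed with their context. */
      typedef struct ToyTupleDesc
      {
          int natts;
          int refcount;
      } ToyTupleDesc;

      static void
      toy_incr_refcount(ToyTupleDesc *desc)
      {
          if (desc->refcount >= 0)
              desc->refcount++;       /* pin: the cache may not free it now */
      }

      static void
      toy_decr_refcount(ToyTupleDesc *desc)
      {
          if (desc->refcount > 0)
              desc->refcount--;       /* unpin: freeing becomes possible again */
      }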
* Remove CXT_printf/CXT1_printf macros. If anyone had found them to be of    Tom Lane    2006-05-23
  any use in the past many years, we'd have made some effort to include them in all executor node types; but in fact they were only in nodeAppend.c and nodeIndexscan.c, up until I copied nodeIndexscan.c's occurrence into the new bitmap node types. Remove some other unused macros in execdebug.h, too. Some day the whole header probably ought to go away in favor of better-designed facilities.
* Update copyright for 2006. Update scripts.    Bruce Momjian    2006-03-05
* Extend the ExecInitNode API so that plan nodes receive a set of flag    Tom Lane    2006-02-28
  bits indicating which optional capabilities can actually be exercised at runtime. This will allow Sort and Material nodes, and perhaps later other nodes, to avoid unnecessary overhead in common cases. This commit just adds the infrastructure and arranges to pass the correct flag values down to plan nodes; none of the actual optimizations are here yet. I'm committing this separately in case anyone wants to measure the added overhead. (It should be negligible.) Simon Riggs and Tom Lane
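  A hedged sketch of how a node could consult such capability bits at init time. The bit names echo the executor's EXEC_FLAG_* convention, but the values and the Sort-node heuristic here are illustrative only, not code from this commit.

      #include <stdbool.h>

      /* Illustrative capability bits handed down to node init routines. */
      #define TOY_FLAG_REWIND    0x01   /* rescan from the start may be requested */
      #define TOY_FLAG_BACKWARD  0x02   /* backward fetch may be requested */
      #define TOY_FLAG_MARK      0x04   /* mark/restore may be requested */

      /* A Sort-like node only needs random access to its result if a caller
       * might rewind, back up, or mark/restore; otherwise it can stream and
       * discard tuples, which is the kind of optimization these flags enable. */
      static bool
      needs_random_access(int eflags)
      {
          return (eflags & (TOY_FLAG_REWIND | TOY_FLAG_BACKWARD | TOY_FLAG_MARK)) != 0;
      }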
* Adjust scan plan nodes to avoid getting an extra AccessShareLock on a    Tom Lane    2005-12-02
  relation if it's already been locked by execMain.c as either a result relation or a FOR UPDATE/SHARE relation. This avoids an extra trip to the shared lock manager state. Per my suggestion yesterday.
* Rearrange code in ExecInitBitmapHeapScan so that we don't initialize the    Tom Lane    2005-12-02
  child plan nodes until we have acquired lock on the relation to scan. The relative order of initialization of plan nodes isn't real important in other cases, but it's critical here because one is supposed to lock a relation before its indexes, not vice versa. The original coding was at least vulnerable to deadlock against DROP INDEX, and perhaps worse things.
* Change seqscan logic so that we check visibility of all tuples on a page    Tom Lane    2005-11-26
  when we first read the page, rather than checking them one at a time. This allows us to take and release the buffer content lock just once per page, instead of once per tuple. Since it's a shared lock the contention penalty for holding the lock longer shouldn't be too bad. We can safely do this only when using an MVCC snapshot; else the assumption that visibility won't change over time is uncool. Therefore there are now two code paths depending on the snapshot type. I also made the same change in nodeBitmapHeapscan.c, where it can be done always because we only support MVCC snapshots for bitmap scans anyway. Also make some incidental cleanups in the APIs of these functions. Per a suggestion from Qingqing Zhou.
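  The mechanism amounts to scanning a page's line pointers once, under a single content-lock acquisition, and caching the visible offsets for later fetches. Below is a simplified sketch with toy types; the real code works with buffers, page headers, and snapshot tests rather than a callback.

      #include <stdbool.h>
      #include <stdint.h>

      #define TOY_MAX_TUPLES_PER_PAGE 256   /* stand-in for MaxHeapTuplesPerPage */

      typedef bool (*toy_visible_fn)(uint16_t offnum, const void *snapshot);

      typedef struct ToyPageScan
      {
          int      nvisible;                          /* visible tuples found */
          uint16_t visible[TOY_MAX_TUPLES_PER_PAGE];  /* their offsets */
      } ToyPageScan;

      /* Check every tuple on the page in one pass, so the buffer content lock
       * is taken and released once per page rather than once per tuple.  Later
       * fetches just walk the cached 'visible' array with no lock at all, which
       * is safe only because an MVCC snapshot's answers cannot change. */
      static void
      toy_scan_page(ToyPageScan *scan, uint16_t nlines,
                    toy_visible_fn visible, const void *snapshot)
      {
          scan->nvisible = 0;
          /* lock_buffer_share(buf);  -- once per page */
          for (uint16_t off = 1;
               off <= nlines && scan->nvisible < TOY_MAX_TUPLES_PER_PAGE;
               off++)
          {
              if (visible(off, snapshot))
                  scan->visible[scan->nvisible++] = off;
          }
          /* unlock_buffer(buf); */
      }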
* Improve ExecStoreTuple to be smarter about replacing the contents of    Tom Lane    2005-11-25
  a TupleTableSlot: instead of calling ExecClearTuple, inline the needed operations, so that we can avoid redundant steps. In particular, when the old and new tuples are both on the same disk page, avoid releasing and re-acquiring the buffer pin --- this saves work in both the bufmgr and ResourceOwner modules. To make this improvement actually useful, partially revert a change I made on 2004-04-21 that caused SeqNext et al to call ExecClearTuple before ExecStoreTuple. The motivation for that, to avoid grabbing the BufMgrLock separately for releasing the old buffer and grabbing the new one, no longer applies. My profiling says that this saves about 5% of the CPU time for an all-in-memory seqscan.
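  The buffer-pin part of this can be pictured as follows. It is a toy sketch, not the real TupleTableSlot or bufmgr code, showing why the same-buffer case saves a release/re-pin round trip.

      typedef int ToyBuffer;                  /* stand-in for a bufmgr Buffer id */
      #define TOY_INVALID_BUFFER (-1)

      typedef struct ToySlot
      {
          ToyBuffer   buffer;                 /* pinned buffer backing the tuple */
          const void *tuple;                  /* tuple currently stored in the slot */
      } ToySlot;

      /* Stand-ins for bufmgr pin bookkeeping. */
      static void toy_pin(ToyBuffer buf)   { (void) buf; }
      static void toy_unpin(ToyBuffer buf) { (void) buf; }

      /* Store a new on-page tuple into the slot.  If it lives in the same
       * buffer as the old tuple, keep the existing pin instead of dropping
       * and re-acquiring it, which is the saving the commit describes. */
      static void
      toy_store_tuple(ToySlot *slot, const void *tuple, ToyBuffer newbuf)
      {
          if (slot->buffer != newbuf)
          {
              if (slot->buffer != TOY_INVALID_BUFFER)
                  toy_unpin(slot->buffer);
              if (newbuf != TOY_INVALID_BUFFER)
                  toy_pin(newbuf);
              slot->buffer = newbuf;
          }
          slot->tuple = tuple;
      }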
* Standard pgindent run for 8.1.    Bruce Momjian    2005-10-15
* Revise pgstats stuff to fix the problems with not counting accesses    Tom Lane    2005-10-06
  generated by bitmap index scans. Along the way, simplify and speed up the code for counting sequential and index scans; it was both confusing and inefficient to be taking care of that in the per-tuple loops, IMHO. initdb forced because of internal changes in pg_stat view definitions.
* For some reason access/tupmacs.h has been #including utils/memutils.h,    Tom Lane    2005-05-06
  which is neither needed by nor related to that header. Remove the bogus inclusion and instead include the header in those C files that actually need it. Also fix unnecessary inclusions and bad inclusion order in tsearch2 files.
* Create executor and planner-backend support for decoupled heap and index    Tom Lane    2005-04-19
  scans, using in-memory tuple ID bitmaps as the intermediary. The planner frontend (path creation and cost estimation) is not there yet, so none of this code can be executed. I have tested it using some hacked planner code that is far too ugly to see the light of day, however. Committing now so that the bulk of the infrastructure changes go in before the tree drifts under me.
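  The overall flow is: the index scan marks candidate tuple IDs in an in-memory bitmap, and the heap side then visits the marked pages in block order. Below is a toy restatement of that flow; the real structures are the tidbitmap code and the bitmap index/heap scan executor nodes, and everything here is a simplified stand-in.

      #include <stdint.h>

      #define TOY_NBLOCKS           1024    /* blocks tracked, for the sketch */
      #define TOY_TUPLES_PER_BLOCK  64      /* offsets tracked per block */

      /* Toy tuple-ID bitmap: one bit per (block, offset) pair. */
      typedef struct ToyTidBitmap
      {
          uint64_t blocks[TOY_NBLOCKS];     /* bit i set => offset i+1 is a candidate */
      } ToyTidBitmap;

      /* The index-scan half: mark a candidate tuple ID. */
      static void
      toy_tbm_add(ToyTidBitmap *tbm, uint32_t block, uint16_t offset)
      {
          if (block < TOY_NBLOCKS && offset >= 1 && offset <= TOY_TUPLES_PER_BLOCK)
              tbm->blocks[block] |= UINT64_C(1) << (offset - 1);
      }

      typedef void (*toy_fetch_fn)(uint32_t block, uint16_t offset, void *arg);

      /* The heap half of the decoupled scan: walk blocks in order, and within
       * each marked block fetch only the offsets the index scan flagged. */
      static void
      toy_bitmap_heap_scan(const ToyTidBitmap *tbm, toy_fetch_fn fetch, void *arg)
      {
          for (uint32_t blk = 0; blk < TOY_NBLOCKS; blk++)
          {
              uint64_t bits = tbm->blocks[blk];

              for (uint16_t off = 1; bits != 0; off++, bits >>= 1)
              {
                  if (bits & 1)
                      fetch(blk, off, arg);   /* heap access in block order */
              }
          }
      }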