| Commit message | Author | Age |
|
|
|
|
|
|
| |
which is neither needed by nor related to that header. Remove the bogus
inclusion and instead include the header in those C files that actually
need it. Also fix unnecessary inclusions and bad inclusion order in
tsearch2 files.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
to eliminate unnecessary deadlocks. This commit adds SELECT ... FOR SHARE
paralleling SELECT ... FOR UPDATE. The implementation uses a new SLRU
data structure (managed much like pg_subtrans) to represent multiple-
transaction-ID sets. When more than one transaction is holding a shared
lock on a particular row, we create a MultiXactId representing that set
of transactions and store its ID in the row's XMAX. This scheme allows
an effectively unlimited number of row locks, just as we did before,
while not costing any extra overhead except when a shared lock actually
has to be shared. Still TODO: use the regular lock manager to control
the grant order when multiple backends are waiting for a row lock.
Alvaro Herrera and Tom Lane.
|
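As a usage illustration (hypothetical table and key names), a shared row lock taken by one transaction no longer blocks another transaction that only needs the same shared lock:

    -- session 1: lock the referenced row in shared mode,
    -- as a foreign-key check would
    BEGIN;
    SELECT * FROM parent WHERE id = 1 FOR SHARE;

    -- session 2: another FOR SHARE on the same row succeeds immediately;
    -- only a conflicting FOR UPDATE (exclusive) request must wait
    BEGIN;
    SELECT * FROM parent WHERE id = 1 FOR SHARE;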
| |
|
|
|
|
|
|
| |
or bitmap), use pred_test to be a little smarter about cases where a
filter clause is logically unnecessary. This may be overkill for the
plain indexscan case, but it's definitely useful for OR'd bitmap scans.
|
|
|
|
|
| |
more efficient routine in restrictinfo.c (which can make use of
make_restrictinfo_internal).
|
|
|
|
|
|
|
|
| |
node, as this behavior is now better done as a bitmap OR indexscan.
This allows considerable simplification in nodeIndexscan.c itself as
well as several planner modules concerned with indexscan plan generation.
Also we can improve the sharing of code between regular and bitmap
indexscans, since they are now working with nigh-identical Plan nodes.
|
|
|
|
|
|
|
|
| |
code in prepqual.c had a small drawback: the flatten_andors code was
able to cope with deeply nested AND/OR structures (like 10000 ORs in
a row), whereas eval_const_expressions tends to recurse until it
overruns the stack. Revise eval_const_expressions so that it doesn't
choke on deeply nested ANDs or ORs.
|
|
|
|
|
|
| |
make some estimate of which available indexes to AND together, rather
than blindly taking 'em all. This could probably stand further
improvement, but it seems to do OK in simple tests.
|
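A sketch of the resulting plan shape, with a hypothetical table t and separate single-column indexes on a and b (the EXPLAIN output shown is illustrative, not verbatim):

    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
    --  Bitmap Heap Scan on t
    --    Recheck Cond: ((a = 1) AND (b = 2))
    --    ->  BitmapAnd
    --          ->  Bitmap Index Scan on t_a_idx
    --                Index Cond: (a = 1)
    --          ->  Bitmap Index Scan on t_b_idx
    --                Index Cond: (b = 2)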
|
|
|
| |
BitmapOr nodes.
|
|
|
|
|
|
|
|
| |
but the code is basically working. Along the way, rewrite the entire
approach to processing OR index conditions, and make it work in join
cases for the first time ever. orindxpath.c is now basically obsolete,
but I left it in for the time being to allow easy comparison testing
against the old implementation.
|
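For example (hypothetical names; a single-table case shown, though join cases now work analogously), OR'd conditions on differently-indexed columns are handled by OR'ing the bitmaps:

    EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 2;
    --  Bitmap Heap Scan on t
    --    Recheck Cond: ((a = 1) OR (b = 2))
    --    ->  BitmapOr
    --          ->  Bitmap Index Scan on t_a_idx
    --          ->  Bitmap Index Scan on t_b_idx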
|
|
|
|
|
|
| |
logic operations during planning. Seems cleaner to create two new Path
node types, instead --- this avoids duplication of cost-estimation code.
Also, create an enable_bitmapscan GUC parameter to control use of bitmap
plans.
|
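The new parameter can be toggled per session, which is handy when comparing bitmap plans against the alternatives (illustrative use):

    SET enable_bitmapscan = off;  -- discourage bitmap plans for this session
    EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 2;
    SET enable_bitmapscan = on;   -- restore the default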
| |
|
|
|
|
| |
it. Per report from Marinos Yannikos.
|
|
|
|
|
|
|
|
|
| |
scans, using in-memory tuple ID bitmaps as the intermediary. The planner
frontend (path creation and cost estimation) is not there yet, so none
of this code can be executed. I have tested it using some hacked planner
code that is far too ugly to see the light of day, however. Committing
now so that the bulk of the infrastructure changes go in before the tree
drifts under me.
|
|
|
|
|
| |
isn't presently set up to pass them an expected tuple descriptor. Bug has
been there since 7.3 but was just recently reported by Thomas Hallgren.
|
|
|
|
|
|
|
| |
indexes. Replace all heap_openr and index_openr calls by heap_open
and index_open. Remove runtime lookups of catalog OID numbers in
various places. Remove relcache's support for looking up system
catalogs by name. Bulky but mostly very boring patch ...
|
|
|
|
| |
from index, since the aggregates ignore NULLs.
|
|
|
|
|
|
|
|
| |
be supported for all datatypes. Add CREATE AGGREGATE and pg_dump support
too. Add specialized min/max aggregates for bpchar, instead of depending
on text's min/max, because otherwise the possible use of bpchar indexes
cannot be recognized.
initdb forced because of catalog changes.
|
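With the new pg_aggregate column exposed through CREATE AGGREGATE, a user-defined min-style aggregate can declare its associated sort operator; a minimal sketch in the 8.1-era syntax (aggregate name hypothetical; int4smaller is the built-in transition function used by min(int4)):

    CREATE AGGREGATE my_min (
        BASETYPE = integer,
        SFUNC = int4smaller,
        STYPE = integer,
        SORTOP = <
    );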
|
|
|
|
|
| |
into indexscans on matching indexes. For the moment, it only handles
int4 and text datatypes; next step is to add a column to pg_aggregate
so that all MIN/MAX aggregates can be handled. Per my recent proposal.
|
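Conceptually, the transformation rewrites the aggregate as an equivalent ordered, bounded indexscan; roughly (hypothetical table t with a btree index on col):

    -- a query like this:
    SELECT min(col) FROM t;
    -- can now be planned as if it were:
    SELECT col FROM t WHERE col IS NOT NULL ORDER BY col LIMIT 1;

The IS NOT NULL condition is what lets NULL index entries be skipped, since the aggregates ignore NULLs.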
|
|
|
|
| |
that is a plain NULL and not a COALESCE with no inputs. Fixes crash
reported by Michael Williamson.
|
|
|
|
|
|
|
| |
decides whether to use hashed grouping instead of sort-plus-uniq
grouping. The function needs an annoyingly large number of parameters,
but this still seems like a win for legibility, since it removes over
a hundred lines from grouping_planner (which is still too big :-().
|
|
|
|
|
|
|
|
|
| |
few palloc's. I also chose to eliminate the restype and restypmod fields
entirely, since they are redundant with information stored in the node's
contained expression; re-examining the expression at need seems simpler
and more reliable than trying to keep restype/restypmod up to date.
initdb forced due to change in contents of stored rules.
|
|
|
|
| |
outer side of an outer join. Per andrew@supernews.
|
|
|
|
|
| |
functions with OUT parameters. The various PLs still need work, as does
pg_dump. Rudimentary docs and regression tests included.
|
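A minimal sketch of the new capability (function and parameter names hypothetical):

    -- OUT parameters implicitly define the function's result row type
    CREATE FUNCTION sum_n_product(x int, y int, OUT s int, OUT p int)
    AS 'SELECT $1 + $2, $1 * $2'
    LANGUAGE SQL;

    SELECT * FROM sum_n_product(3, 4);  -- one row with columns s and p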
|
|
|
|
|
|
|
|
|
|
|
|
| |
change saves a great deal of space in pg_proc and its primary index,
and it eliminates the former requirement that INDEX_MAX_KEYS and
FUNC_MAX_ARGS have the same value. INDEX_MAX_KEYS is still embedded
in the on-disk representation (because it affects index tuple header
size), but FUNC_MAX_ARGS is not. I believe it would now be possible
to increase FUNC_MAX_ARGS at little cost, but haven't experimented yet.
There are still a lot of vestigial references to FUNC_MAX_ARGS, which
I will clean up in a separate pass. However, getting rid of it
altogether would require changing the FunctionCallInfoData struct,
and I'm not sure I want to buy into that.
|
|
|
|
|
|
|
|
|
|
| |
really ought to run before canonicalize_qual, because it can now produce
forms that canonicalize_qual knows how to improve (eg, NOT clauses).
Also, because eval_const_expressions already knows about flattening
nested ANDs and ORs into N-argument form, the initial flatten_andors
pass in canonicalize_qual is now completely redundant and can be
removed. This doesn't save a whole lot of code, but the time and
palloc traffic eliminated is a useful gain on large expression trees.
|
|
|
|
|
|
|
|
|
|
| |
access: define new index access method functions 'amgetmulti' that can
fetch multiple TIDs per call. (The functions exist but are totally
untested as yet.) Since I was modifying pg_am anyway, remove the
no-longer-needed 'rel' parameter from amcostestimate functions, and
also remove the vestigial amowner column that was creating useless
work for Alvaro's shared-object-dependencies project.
Initdb forced due to changes in pg_am.
|
|
|
|
|
|
|
| |
that is 'x = true' becomes 'x' and 'x = false' becomes 'NOT x'. This isn't
all that amazingly useful in itself, but it ensures that we will recognize
the different forms as being logically equivalent when checking partial
index predicates. Per example from Patrick Clery.
|
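The practical payoff (hypothetical names): a query spelled with an explicit comparison now matches a partial index whose predicate was spelled with the bare column, and vice versa:

    CREATE INDEX t_active_idx ON t (id) WHERE active;

    -- both forms now simplify to the same predicate, so both can match:
    SELECT id FROM t WHERE active = true;
    SELECT id FROM t WHERE active;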
|
|
|
|
|
| |
structs. There are many places in the planner where we were passing
both a rel and an index to subroutines, and now need only pass the
index struct. Notationally simpler, and perhaps a tad faster.
|
|
|
|
|
|
|
|
|
|
|
|
| |
for boolean indexes. Previously we would only use such an index with
WHERE clauses like 'indexkey = true' or 'indexkey = false'. The new
code transforms the cases 'indexkey', 'NOT indexkey', 'indexkey IS TRUE',
and 'indexkey IS FALSE' into one of these. While this is only marginally
useful in itself, I intend soon to change constant-expression simplification
so that 'foo = true' and 'foo = false' are reduced to just 'foo' and
'NOT foo' ... which would lose the ability to use boolean indexes for
such queries at all, if the indexscan machinery couldn't make the
reverse transformation.
|
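An illustrative sketch (table and column names invented) of the WHERE-clause spellings that can now use an ordinary index on a boolean column:

    CREATE INDEX t_flag_idx ON t (flag);

    -- each of these is transformed into an indexable 'flag = true'
    -- or 'flag = false' condition:
    SELECT * FROM t WHERE flag;
    SELECT * FROM t WHERE NOT flag;
    SELECT * FROM t WHERE flag IS TRUE;
    SELECT * FROM t WHERE flag IS FALSE;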
|
|
|
|
|
|
|
| |
never-yet-vacuumed relation. This restores the pre-8.0 behavior of
avoiding seqscans during initial data loading, while still allowing
reasonable optimization after a table has been vacuumed. Several
regression test cases revert to 7.4-like behavior, which is probably
a good sign. Per gripes from Keith Browne and others.
|
|
|
|
|
|
| |
grouping_planner() to preprocess_targetlist(), according to a comment
in grouping_planner(). I think the refactoring makes sense, and moves
some extraneous details out of grouping_planner().
|
|
|
|
|
|
|
|
|
| |
Formerly, if such a clause contained no aggregate functions we mistakenly
treated it as equivalent to WHERE. Per spec it must cause the query to
be treated as a grouped query of a single group, the same as appearance
of aggregate functions would do. Also, the HAVING filter must execute
after aggregate function computation even if it itself contains no
aggregate functions.
|
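A small demonstration of the difference (hypothetical table t):

    -- WHERE filters rows before aggregation: count over zero rows
    SELECT count(*) FROM t WHERE false;   -- one row containing 0

    -- HAVING makes this a one-group grouped query and filters the
    -- group itself, so the lone group is discarded
    SELECT count(*) FROM t HAVING false;  -- zero rows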
|
|
|
|
|
|
| |
on-the-fly, and thereby avoid blowing out memory when the planner has
underestimated the hash table size. Hash join will now obey the
work_mem limit with some faithfulness. Per my recent proposal
(hash aggregate part isn't done yet though).
|
|
|
|
|
|
|
|
|
|
| |
of AND and OR clauses. The key point here is that an OR on the
predicate side has to be treated gingerly: we may be able to prove
that the OR is implied even when no one of its components is implied.
For example (x OR y) implies (x OR y OR z) even though no one of x,
y, or z can be individually proven. This code handles both the
example shown recently by Sergey Koshcheyev and the one shown last
October by Dawid Kuroczko.
|
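In index terms (hypothetical names), this lets a restriction imply a strictly weaker OR in a partial index predicate:

    CREATE INDEX t_part_idx ON t (id) WHERE (x OR y OR z);

    -- (x OR y) implies (x OR y OR z), so the partial index is usable
    -- even though none of x, y, z is individually provable:
    SELECT id FROM t WHERE x OR y;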
|
|
|
|
|
| |
indexscans involving partial indexes. These would always be dominated
by a simple indexscan on such an index, so there's no point in considering
them. Fixes overoptimism in a patch I applied last October.
|
|
|
|
|
|
| |
it was in 7.4, and add some comments explaining why it has to be this way.
I broke it for OR'd index predicates in a fit of code cleanup last summer.
Per example from Sergey Koshcheyev.
|
|
|
|
|
|
|
|
| |
form of CASE (eg, CASE 0 WHEN 1 THEN ...) can be constant-folded as it
was in 7.4. Also, avoid constant-folding result expressions that are
certainly unreachable --- the former coding was a bit cavalier about this
and could generate unexpected results for all-constant CASE expressions.
Add regression test cases. Per report from Vlad Marchenko.
|
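A sketch of both behaviors (values invented):

    -- an all-constant simple CASE folds at plan time again:
    SELECT CASE 0 WHEN 1 THEN 'one' ELSE 'other' END;

    -- an unreachable result arm must not be folded; this returns 42
    -- rather than failing with division-by-zero at plan time:
    SELECT CASE 1 WHEN 1 THEN 42 ELSE 1/0 END;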
|
|
|
|
|
| |
look at the actual aggregate transition datatypes and the actual overhead
needed by nodeAgg.c, instead of using pessimistic round numbers.
Per a discussion with Michael Tiemann.
|
|
|
|
|
| |
left input's sorting, because null rows may be inserted at various points.
Per report from Ferenc Lutischán.
|
|
|
|
|
|
|
|
| |
Also performed an initial pass at updating our copyright dates to extend
to 2005. This first pass was very simple: change every file where grep
finds both '1996-2004' and the word 'Copyright'. I scanned through the
generated list with 'less', before and after, to make sure that only the
right entries were picked up.
|
|
|
|
|
| |
from goranpop@nspoint.net. I think this escaped notice because in
simple cases the list is NIL on entry.
|
|
|
|
|
|
|
|
| |
of an inheritance child table is binary-compatible with the rowtype of
its parent, invent an expression node type that does the conversion
correctly. Fixes the new bug exhibited by Kris Shannon as well as a
lot of old bugs that would only show up when using multiple inheritance
or after altering the parent table.
|
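The underlying hazard, sketched (hypothetical tables): after the parent is altered, a child's physical column order no longer matches the parent's rowtype, so whole-row access through the parent needs an explicit conversion step:

    CREATE TABLE parent (a int);
    CREATE TABLE child (b int) INHERITS (parent);
    ALTER TABLE parent ADD COLUMN c int;

    -- child rows are physically (a, b, c) while parent's rowtype is
    -- (a, c); the whole-row reference must convert child rows
    SELECT p FROM parent p;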
|
|
|
|
|
|
|
|
|
| |
In particular, there was a mathematical tie between the two possible
nestloop-with-materialized-inner-scan plans for a join (ie, we computed
the same cost with either input on the inside), resulting in a roundoff
error driven choice, if the relations were both small enough to fit in
sort_mem. Add a small cost factor to ensure we prefer materializing the
smaller input. This changes several regression test plans, but with any
luck we will now have more stability across platforms.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
a relation's number of blocks, rather than the possibly-obsolete value
in pg_class.relpages. Scale the value in pg_class.reltuples correspondingly
to arrive at a hopefully more accurate number of rows. When pg_class
contains 0/0, estimate a tuple width from the column datatypes and divide
that into current file size to estimate number of rows. This improved
methodology allows us to jettison the ancient hacks that put bogus default
values into pg_class when a table is first created. Also, per a suggestion
from Simon, make VACUUM (but not VACUUM FULL or ANALYZE) adjust the value
it puts into pg_class.reltuples to try to represent the mean tuple density
instead of the minimal density that actually prevails just after VACUUM.
These changes alter the plans selected for certain regression tests, so
update the expected files accordingly. (I removed join_1.out because
it's not clear if it still applies; we can add back any variant versions
as they are shown to be needed.)
|
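The inputs to this estimation are visible in the catalog; a quick look (table name hypothetical):

    -- reltuples/relpages now act as a tuple-density estimate that the
    -- planner scales by the table's current physical size in blocks
    SELECT relpages, reltuples FROM pg_class WHERE relname = 't';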
|
|
|
| |
selectivity estimates, per recent discussion.
|
|
|
|
|
|
|
|
|
|
|
| |
estimates when combining the estimates for a range query. As pointed out
by Miquel van Smoorenburg, the existing check for an impossible combined
result would quite possibly fail to detect one default and one non-default
input. It seems better to use the default range query estimate in such
cases. To do so, add a check for an estimate of exactly DEFAULT_INEQ_SEL.
This is a bit ugly because it introduces additional coupling between
clauselist_selectivity and scalarltsel/scalargtsel, but it's not like
there wasn't plenty already...
|
|
|
|
|
|
| |
type-and-length coercion function, make sure that the coercion function
is told the correct typmod. Fixes Kris Jurka's example of a domain
over bit(N).
|
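Approximately the reported scenario (domain name invented; behavioral details hedged):

    CREATE DOMAIN bit8 AS bit(8);

    -- the coercion function must receive typmod 8 so the value is
    -- actually coerced to the domain's declared length
    SELECT CAST(B'1010' AS bit8);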
|
|
|
| |
RestrictInfo nodes in the query expression. Per example from James Robinson.
|