being hidden when current_query is. Relocate it to a column position
more consistent with that behavior. Per discussion.
|
pg_stat_activity and recorded in log entries.
Dave Page, reviewed by Andres Freund
|
by adding a requirement that build_join_rel add new join RelOptInfos to the
appropriate list immediately at creation. Per report from Robert Haas,
the list_concat_unique_ptr() calls that this change eliminates were taking
the lion's share of the runtime in larger join problems. This doesn't do
anything to fix the fundamental combinatorial explosion in large join
problems, but it should push out the threshold of pain a bit further.
Note: because this changes the order in which joinrel lists are built,
it might result in changes in selected plans in cases where different
alternatives have exactly the same costs. There is one example in the
regression tests.
|
be part of multixacts, so allocate a slot for each prepared transaction in
the "oldest member" array in multixact.c. On PREPARE TRANSACTION, transfer
the oldest member value from the current backend's slot to the prepared xact
slot. Also save and recover the value from the 2pc state file.
The symptom of the bug was that after a transaction prepared, a shared lock
still held by the prepared transaction was sometimes ignored by other
transactions.
Fix back to 8.1, where both 2PC and multixact were introduced.
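For illustration, a minimal sketch of the symptom (hypothetical table t, not
from the commit):

    BEGIN;
    SELECT * FROM t WHERE id = 1 FOR SHARE;
    PREPARE TRANSACTION 'p1';
    -- before the fix, a concurrent UPDATE of that row could sometimes
    -- proceed as though the prepared transaction's shared lock were gone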
|
Jan Urbanski
|
checked to determine whether the trigger should be fired.
For BEFORE triggers this is mostly a matter of spec compliance; but for AFTER
triggers it can provide a noticeable performance improvement, since queuing of
a deferred trigger event and re-fetching of the row(s) at end of statement can
be short-circuited if the trigger does not need to be fired.
Takahiro Itagaki, reviewed by KaiGai Kohei.
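For illustration, a sketch of the new clause (table and function names are
hypothetical):

    CREATE TRIGGER salary_audit
        AFTER UPDATE ON employees
        FOR EACH ROW
        WHEN (OLD.salary IS DISTINCT FROM NEW.salary)
        EXECUTE PROCEDURE log_salary_change();
    -- the AFTER trigger event is queued only when the WHEN condition
    -- is true, skipping the end-of-statement row re-fetch otherwise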
|
output filename if CSV logging was enabled and only one of the two possible
output files got rotated during a particular call (which would, in fact,
typically be the case during a size-based rotation). This would amount to
about MAXPGPATH (1KB) per rotation, and it's been there since the CSV
code was put in, so it's surprising that nobody noticed it before.
Per bug #5196 from Thomas Poindessous.
|
strength of database passwords, and create a sample implementation of
such a hook as a new contrib module "passwordcheck".
Laurenz Albe, reviewed by Takahiro Itagaki
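A usage sketch (assuming the module has been loaded via
shared_preload_libraries; the role name is hypothetical):

    -- postgresql.conf: shared_preload_libraries = 'passwordcheck'
    ALTER ROLE alice PASSWORD 'short1';     -- rejected: too short
    ALTER ROLE alice PASSWORD 'alice2009';  -- rejected: contains the role name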
|
adopted for EXPLAIN. This will allow additional options to be implemented
in future without having to make them fully-reserved keywords. The old syntax
remains available for existing options, however.
Itagaki Takahiro
|
non-Var sort/group expressions using ressortgroupref labels instead of
depending entirely on equal()-ity of the upper node's tlist expressions
to the lower node's. This avoids emitting the wrong outputs in cases
where there are textually identical volatile sort/group expressions,
as for example
select distinct random(),random() from generate_series(1,10);
Per report from Andrew Gierth.
Backpatch to 8.4. Arguably this is wrong all the way back, but the only known
case where there's an observable problem is when using hash aggregation to
implement DISTINCT, which is new as of 8.4. So for the moment I'll refrain
from backpatching further.
|
mergejoin to shield it from doing mark/restore and refetches. Put an explicit
flag in MergePath so we can centralize the logic that knows about this,
and add costing logic that considers using Materialize even when it's not
forced by the previously-existing considerations. This is in response to
a discussion back in August that suggested that materializing an inner
indexscan can be helpful when the refetch percentage is high enough.
|
patch.
|
but the transformed ArrayExpr claimed to have a return type of "domain",
even though the domain constraint was only checked by the enclosing
CoerceToDomain node. With this fix, the ArrayExpr is correctly labeled with
the base type of the domain. Per gripe by Tom Lane.
|
we need to check domain constraints. We used to do it correctly, but 8.4
introduced a separate code path for the "ARRAY[]::arraytype" case to infer
the type of an empty ARRAY construct from the cast target, and forgot to take
domains into account.
Per report from Florian G. Pflug.
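A sketch of the now-checked case, using a hypothetical domain:

    CREATE DOMAIN nonempty_ints AS int[]
        CHECK (array_length(VALUE, 1) IS NOT NULL);
    SELECT ARRAY[]::nonempty_ints;
    -- the empty array has no first dimension, so the CHECK expression
    -- is now correctly evaluated and raises a constraint violation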
|
User-defined consistent functions assume that the check array contains at
least one TRUE element, but that assumption did not hold while scanning the
pending list.
Per report from Yury Don <yura@vpcit.ru>
|
if the initial value of a string variable was NULL, which is entirely
possible. Noted while experimenting with custom_variable_classes.
|
Per discussion, this should result in defaulting to SQL_ASCII encoding.
The original coding could not support that because it conflated selection
of SQL_ASCII encoding with not being able to determine the encoding.
Adjust pg_get_encoding_from_locale()'s API to distinguish these cases,
and fix callers appropriately. Only initdb actually changes behavior,
since the other callers were perfectly content to consider these cases
equivalent.
Per bug #5178 from Boh Yap. Not going to bother back-patching, since
no one has complained before and there's an easy workaround (namely,
specify the encoding you want).
|
directly. This was a lot of trouble, but should be worth it in terms of
not having to keep the plpgsql lexer in step with core anymore. In addition
the handling of keywords is significantly better-structured, allowing us to
de-reserve a number of words that plpgsql formerly treated as reserved.
|
This is a preparatory patch for allowing a dynamic cursor name to be used
in the ECPG grammar.
Author: Zoltan Boszormenyi
|
The main motivation for this is that it's required for Informix compatibility
in ECPG.
This patch makes the ECPG and core grammars a bit closer to one another for
these productions.
Author: Zoltan Boszormenyi
|
Apple has fixed that bug in 10.6.2, and we should encourage users to
update to that version rather than trusting this cosmetic patch.
As was recently noted by Stephen Tyler, this patch was only masking
the problem in the context of DROP TABLESPACE, but the failure could
occur in other places such as pg_xlog cleanup.
|
Add C comment about why there is no interval_abs(): it is unclear what
value to return:
http://archives.postgresql.org/pgsql-general/2009-10/msg01031.php
http://archives.postgresql.org/pgsql-general/2009-11/msg00041.php
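For illustration, the same interval can move a date forward or backward
depending on the month it is applied to, so the sign of an "absolute value"
is not well defined:

    SELECT date '2009-01-01' + interval '1 mon -30 days';  -- 2009-01-02 (forward one day)
    SELECT date '2009-02-01' + interval '1 mon -30 days';  -- 2009-01-30 (backward two days)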
|
In VACUUM FULL, an interrupt after the initial transaction has been recorded
as committed can cause postmaster to restart with the following error message:
PANIC: cannot abort transaction NNNN, it was already committed
This problem has been reported many times.
In lazy VACUUM, an interrupt after the table has been truncated by
lazy_truncate_heap causes other backends' relcache to still point to the
removed pages; this can cause future INSERT and UPDATE queries to error out
with the following error message:
could not read block XX of relation 1663/NNN/MMMM: read only 0 of 8192 bytes
The window for this race condition is extremely narrow, but it has been seen
in the wild, involving a cancelled autovacuum process.
The solution for both problems is to inhibit interrupts in both operations
until after the respective transactions have been committed. It's not a
complete solution, because the transaction could theoretically be aborted by
some other error, but it at least fixes the most common causes of both problems.
|
of different parsers having different YYSTYPE unions that they want to use
with it. I defined a new union core_YYSTYPE that is just the (very short)
list of semantic values returned by the core scanner. I had originally
worried that this would require an extra interface layer, but actually we can
have parser.c's base_yylex (formerly filtered_base_yylex) take care of that at
no extra cost. Names associated with the core scanner are now "core_yy_foo",
with "base_yy_foo" being used in the core Bison parser and the parser.c
interface layer.
This solves the last serious stumbling block to eliminating plpgsql's separate
lexer. One restriction that will still be present is that plpgsql and the
core will have to agree on the token numbers assigned to tokens that can be
returned by the core lexer. Since Bison doesn't seem willing to accept
external assignments of those numbers, we'll have to live with decreeing that
core and plpgsql grammars declare these tokens first and in the same order.
|
can be the name of a plpgsql cursor variable, which formerly was converted
to $N before the core parser saw it, but that's no longer the case.
Deal with plain name references to plpgsql variables, and add a regression
test case that exposes the failure.
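A sketch of the affected pattern, as a fragment of a plpgsql function body
(names are hypothetical):

    DECLARE
        c CURSOR FOR SELECT relname FROM pg_class;
        r record;
    BEGIN
        OPEN c;
        FETCH c INTO r;  -- "c" now reaches the parser as a plain name,
        CLOSE c;         -- not a pre-substituted $N parameter
    END;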
|
DO blocks for languages that have both trusted and untrusted variants.
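For illustration, a DO block naming the untrusted variant explicitly
(plperlu is just an example of such a pair; requires superuser):

    DO LANGUAGE plperlu $$
        warn "running in the untrusted variant\n";
    $$;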
|
it works just as well to have them be ordinary identifiers, and this gets rid
of a number of ugly special cases. Plus we aren't interfering with non-rule
usage of these names.
catversion bump because the names change internally in stored rules.
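A sketch of the (unchanged) user-visible syntax, with hypothetical tables:

    CREATE RULE t_audit AS ON UPDATE TO t
        DO ALSO INSERT INTO t_log SELECT OLD.id, NEW.id;
    -- OLD and NEW here now parse as ordinary identifiers rather than
    -- special keywords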
|
Pointed out by Debian's lintian.
|
that are not handled by find_coercion_pathway, notably composite->RECORD.
Now that 8.4 supports composites as primary keys, it's worth dealing with
this case.
|
tarballs under 100 characters. This should avoid failures with certain
untarring tools (WinZip and Midnight Commander have been mentioned as
likely suspects). Per my proposal of yesterday.
catversion bumped since the initial contents of pg_proc change.
|
at the first keyword of the expression, rather than drawing a rather
artificial distinction between the ESCAPE subclause and the rest.
Per gripe from Gokulakannan Somasundaram and subsequent discussion.
|
As proof of concept, modify plpgsql to use the hooks. plpgsql is still
inserting $n symbols textually, but the "back end" of the parsing process now
goes through the ParamRef hook instead of using a fixed parameter-type array,
and then execution only fetches actually-referenced parameters, using a hook
added to ParamListInfo.
Although there's a lot left to be done in plpgsql, this already cures the
"if (TG_OP = 'INSERT' and NEW.foo ...)" problem, as illustrated by the
changed regression test.
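The cured pattern, sketched as a trigger-function fragment:

    IF (TG_OP = 'INSERT' AND NEW.foo > 0) THEN
        -- formerly every parameter in the expression was supplied up
        -- front, so an unassigned NEW (e.g. on DELETE) made the whole
        -- condition fail; now NEW.foo is fetched only if referenced
        RAISE NOTICE 'inserted positive foo';
    END IF;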
|
Itagaki Takahiro, with small changes by me and Simon.
|
that it can scribble on scan->xs_ctup.t_self while following HOT chains,
so we can't rely on that to stay valid between hashgettuple() calls.
Introduce a private variable in HashScanOpaque, instead.
|
hash indexes keep entries sorted by hash value. First, the original plans for
concurrency assumed that insertions would happen only at the end of a page,
which is no longer true; this could cause scans to transiently fail to find
index entries in the presence of concurrent insertions. We can compensate
by teaching scans to re-find their position after re-acquiring read locks.
Second, neither the bucket split nor the bucket compaction logic had been
fixed to preserve hashvalue ordering, so application of either of those
processes could lead to permanent corruption of an index, in the sense
that searches might fail to find entries that are present.
This patch fixes the split and compaction logic to preserve hashvalue
ordering, but it cannot do anything about pre-existing corruption. We will
need to recommend reindexing all hash indexes in the 8.4.2 release notes.
To buy back the performance loss hereby induced in split and compaction,
fix them to use PageIndexMultiDelete instead of retail PageIndexDelete
operations. We might later want to do something with qsort'ing the
page contents rather than doing a binary search for each insertion,
but that seemed more invasive than I cared to risk in a back-patch.
Per bug #5157 from Jeff Janes and subsequent investigation.
|
recent proposal. As proof of concept, remove knowledge of Params from the
core parser, arranging for them to be handled entirely by parser hook
functions. It turns out we need an additional hook for that --- I had
forgotten about the code that handles inferring a parameter's type from
context.
This is a preliminary step towards letting plpgsql handle its variables
through parser hooks. Additional work remains to be done to expose the
facility through SPI, but I think this is all the changes needed in the core
parser.
|
The original coding ensured nbuckets and nbatch didn't exceed INT_MAX,
which while not insane on its own terms did nothing to protect subsequent
code like "palloc(nbatch * sizeof(BufFile *))". Since enormous join size
estimates might well be planner error rather than reality, it seems best
to constrain the initial sizes to be not more than work_mem/sizeof(pointer),
thus ensuring the allocated arrays don't exceed work_mem. We will allow
nbatch to get bigger than that during subsequent ExecHashIncreaseNumBatches
calls, but we should still guard against integer overflow in those palloc
requests. Per bug #5145 from Bernt Marius Johnsen.
Although the given test case only seems to fail back to 8.2, previous
releases have variants of this issue, so patch all supported branches.
|
adding the ModifyTable node type --- I had been thinking ModifyTable should
replace Append as a special case in push_plan(), but actually both of them
have to be special-cased.
|
when FOR UPDATE is propagated down into a sub-select expanded from a view.
Similar bug to parser's isLockedRel issue that I fixed yesterday; likewise
seems not quite worth the effort to back-patch.
|
underneath the Limit node, not atop it. This fixes the old problem that such
a query might unexpectedly return fewer rows than the LIMIT says, due to
LockRows discarding updated rows.
There is a related problem that LockRows might destroy the sort ordering
produced by earlier steps; but fixing that by pushing LockRows below Sort
would create serious performance problems that are unjustified in many
real-world applications, as well as potential deadlock problems from locking
many more rows than expected. Instead, keep the present semantics of applying
FOR UPDATE after ORDER BY within a single query level; but allow the user to
specify the other way by writing FOR UPDATE in a sub-select. To make that
work, track whether FOR UPDATE appeared explicitly in sub-selects or got
pushed down from the parent, and don't flatten a sub-select that contained an
explicit FOR UPDATE.
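An illustrative pair of queries (hypothetical table jobs):

    -- locks are now taken below the LIMIT, so rows discarded by the
    -- recheck no longer silently shrink the result set
    SELECT * FROM jobs WHERE state = 'pending' LIMIT 5 FOR UPDATE;

    -- to lock before the ORDER BY instead, push FOR UPDATE down
    SELECT * FROM (SELECT * FROM jobs FOR UPDATE) ss ORDER BY priority;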
|
that it's called within an AfterTriggerBeginQuery/AfterTriggerEndQuery pair.
The RI cascade triggers suppress that overhead on the assumption that they
are always run non-deferred, so it's possible to violate the condition if
someone mistakenly changes pg_trigger to mark such a trigger deferred.
We don't really care about supporting that, but throwing an error instead
of crashing seems desirable. Per report from Marcelo Costa.
|
for example in
WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE
the FOR UPDATE will now affect bar but not foo. This is more useful and
consistent than the original 8.4 behavior, which tried to propagate FOR UPDATE
into the WITH query but always failed due to assorted implementation
restrictions. Even though we are in process of removing those restrictions,
it seems correct on philosophical grounds to not let the outer query's
FOR UPDATE affect the WITH query.
In passing, fix isLockedRel which frequently got things wrong in
nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the
current query level, not subqueries. This has been broken for a long time,
but it doesn't seem worth back-patching further than 8.4 because the actual
consequences are minimal. At worst the parser would sometimes get
RowShareLock on a relation when it should be AccessShareLock or vice versa.
That would only make a difference if someone were using ExclusiveLock
concurrently, which no standard operation does, and anyway FOR UPDATE
doesn't result in visible changes so it's not clear that the someone would
notice any problem. Between that and the fact that FOR UPDATE barely works
with subqueries at all in existing releases, I'm not excited about worrying
about it.
|
files in one run.
|
those accepted by date_in(). I confused julian day numbers and number of
days since the postgres epoch 2000-01-01 in the original patch.
I just noticed that it's still easy to get such out-of-range values into
the database using to_date or +- operators, but this patch doesn't do
anything about those functions.
Per report from James Pye.
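A sketch of the remaining loopholes (values are illustrative):

    SELECT to_date('5874898-01-01', 'YYYY-MM-DD');  -- one year past date_in()'s limit
    SELECT date '4714-11-24 BC' - 1;                -- steps below the lower bound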
|
a lot of strange behaviors that occurred in join cases. We now identify the
"current" row for every joined relation in UPDATE, DELETE, and SELECT FOR
UPDATE/SHARE queries. If an EvalPlanQual recheck is necessary, we jam the
appropriate row into each scan node in the rechecking plan, forcing it to emit
only that one row. The former behavior could rescan the whole of each joined
relation for each recheck, which was terrible for performance, and what's much
worse could result in duplicated output tuples.
Also, the original implementation of EvalPlanQual could not re-use the recheck
execution tree --- it had to go through a full executor init and shutdown for
every row to be tested. To avoid this overhead, I've associated a special
runtime Param with each LockRows or ModifyTable plan node, and arranged to
make every scan node below such a node depend on that Param. Thus, by
signaling a change in that Param, the EPQ machinery can just rescan the
already-built test plan.
This patch also adds a prohibition on set-returning functions in the
targetlist of SELECT FOR UPDATE/SHARE. This is needed to avoid the
duplicate-output-tuple problem. It seems fairly reasonable since the
other restrictions on SELECT FOR UPDATE are meant to ensure that there
is a unique correspondence between source tuples and result tuples,
which an output SRF destroys as much as anything else does.
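The new prohibition, sketched (hypothetical table t):

    SELECT generate_series(1, 2), id FROM t FOR UPDATE;
    -- now raises an error: the SRF would yield multiple output tuples
    -- per locked source tuple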
|
child tables. This was found to be useless and confusing in virtually all
cases, and also contrary to the SQL standard.
|
style by default. Per discussion, there seems to be hardly anything that
really relies on being able to change the regex flavor, so the ability to
select it via embedded options ought to be enough for any stragglers.
Also, if we didn't remove the GUC, we'd really be morally obligated to
mark the regex functions non-immutable, which'd possibly create performance
issues.
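For illustration, a flavor can still be chosen per-pattern with an embedded
director, e.g. "***=" for literal-string matching:

    SELECT 'a.c' ~ '***=a.c';  -- true: the pattern is taken literally
    SELECT 'abc' ~ '***=a.c';  -- false: the dot is not a metacharacter here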
|
Per recent discussion, add_missing_from has been deprecated for long enough to
consider removing, and it's getting in the way of planned parser refactoring.
The system now always behaves as though add_missing_from were OFF.
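The now-unconditional behavior, sketched with hypothetical tables:

    SELECT t.x FROM s;
    -- ERROR:  missing FROM-clause entry for table "t"
    -- (formerly, with add_missing_from = on, "t" was added implicitly)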