| Commit message | Author | Age |
| |
pg_migrator actually needs and not just a partial solution. We have to be
able to specify the OID that the new toast table should be created with.
| |
provided by Andrew.
| |
Author: Itagaki Takahiro <itagaki.takahiro@oss.ntt.co.jp>
| |
will throw an error, rather than possibly allowing someone to synthesize
a manual call to an internal-accepting function. As of CVS HEAD and existing
releases, all such functions are either STRICT or careful about null inputs,
so there is no current security issue here. But it seems like a good idea
to lock this down to protect against future mistakes.
In passing, similarly lock down trigger_in, language_handler_in, opaque_in,
and shell_in. These are not believed to present any security risk, but
there's still no good reason to allow nulls of these types to be created.
I left the polymorphic pseudotypes (anyelement etc) alone, since a null
of one of those types doesn't seem to be a problem --- the worst you can
say about it is that it doesn't have an underlying non-polymorphic type.
If we were to make this change during normal development, we'd just
automatically bump catversion for a pg_proc.h change. But since this doesn't
create a compatibility risk and isn't believed to be fixing a live bug, it
seems better not to force a catversion bump in late beta.
| |
behavior in cases where we don't know the heap tuple count accurately; in
particular partial vacuum, but this also makes the API a bit more useful
for ANALYZE. This patch adds "estimated_count" flags to both structs so
that an approximate count can be flagged as such, and adjusts the logic
so that approximate counts are not used for updating pg_class.reltuples.
This fixes my previous complaint that VACUUM was putting ridiculous values
into pg_class.reltuples for indexes. The actual impact of that bug is
limited, because the planner only pays attention to reltuples for an index
if the index is partial; which probably explains why beta testers hadn't
noticed a degradation in plan quality from it. But it needs to be fixed.
The whole thing is a bit messy and should be redesigned in future, because
reltuples now has the potential to drift quite far away from reality when
a long period elapses with no non-partial vacuums. But this is as good as
it's going to get for 8.4.
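For illustration, a sketch of how such a flag pairs with the count fields; the
struct and field names below follow the access/genam.h definitions found in
later PostgreSQL sources, so treat them as assumptions, not the exact 8.4 text:

    typedef struct IndexVacuumInfo
    {
        /* ... other fields omitted in this sketch ... */
        bool        estimated_count;   /* num_heap_tuples is only an estimate */
        double      num_heap_tuples;   /* tuples remaining in the heap */
    } IndexVacuumInfo;

    typedef struct IndexBulkDeleteResult
    {
        /* ... other fields omitted in this sketch ... */
        bool        estimated_count;   /* num_index_tuples is only an estimate */
        double      num_index_tuples;  /* tuples remaining in the index */
    } IndexBulkDeleteResult;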
| |
is supposed to remove duplicate heap TIDs, we have to be sure to reduce the
tuple size and posting-item count accordingly in addItemPointersToTuple().
Failing to do so resulted in the effective injection of garbage TIDs into the
index contents, ie, whatever happened to be in the memory palloc'd for the
new tuple. I'm not sure that this fully explains the index corruption
reported by Tatsuo Ishii, but the test case I'm using no longer fails.
| |
should use GinItemPointerGetBlockNumber/GinItemPointerGetOffsetNumber,
not ItemPointerGetBlockNumber/ItemPointerGetOffsetNumber, because the latter
will Assert() on ip_posid == 0, ie a "Min" pointer. (Thus, ItemPointerIsMin
has never worked at all, but it seems unused at present.) I'm not certain
that the case can occur in normal functioning, but it's blowing up on me
while investigating Tatsuo-san's data corruption problem. In any case it
seems like a problem waiting to bite someone.
Back-patch just in case this really is a problem for somebody in the field.
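For context, a paraphrase of the two accessor styles (not the verbatim header
text; the exact definitions live in itemptr.h and the GIN headers):

    /* Generic accessor: asserts the pointer is valid, which requires ip_posid != 0 */
    #define ItemPointerGetOffsetNumber(pointer) \
        (AssertMacro(ItemPointerIsValid(pointer)), (pointer)->ip_posid)

    /* GIN accessors: read the raw fields, so a "Min" pointer (ip_posid == 0) is tolerated */
    #define GinItemPointerGetBlockNumber(pointer) \
        BlockIdGetBlockNumber(&(pointer)->ip_blkid)
    #define GinItemPointerGetOffsetNumber(pointer) \
        ((pointer)->ip_posid)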
| |
by extending the ereport() API to cater for pluralization directly. This
is better than the original method of calling ngettext outside the elog.c
code because (1) it avoids double translation, which wastes cycles and in
the worst case could give a wrong result; and (2) it avoids having to use
a different coding method in PL code than in the core backend. The
client-side uses of ngettext are not touched since neither of these concerns
is very pressing in the client environment. Per my proposal of yesterday.
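For illustration, the resulting call pattern looks roughly like this (message
text and variable name invented for the example):

    ereport(NOTICE,
            (errmsg_plural("%d file was skipped",
                           "%d files were skipped",
                           nfiles,
                           nfiles)));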
| |
YEAR, DECADE, CENTURY, or MILLENNIUM fields, just as it always has done for
other types of fields. The previous behavior seems to have been a hack to
avoid defining bit-positions for all these field types in DTK_M() masks,
rather than something that was really considered to be desired behavior.
But there is room in the masks for these, and we really need to tighten up
at least the behavior of DAY and YEAR fields to avoid unexpected behavior
associated with the 8.4 changes to interpret ambiguous fields based on the
interval qualifier (typmod) value. Per my example and proposed patch.
| |
EncodeTimeOnly, EncodeDateTime, EncodeInterval. These don't have any good
reason to fail, and their callers were mostly not checking anyway.
| |
relopt_kind value in add_reloption_kind(). Per Zdenek Kotala.
| |
redirecting libxml's allocations into a Postgres context. Instead, just let
it use malloc directly, and add PG_TRY blocks as needed to be sure we release
libxml data structures in error recovery code paths. This is ugly but seems
much more likely to play nicely with third-party uses of libxml, as seen in
recent trouble reports about using Perl XML facilities in pl/perl and bug
#4774 about contrib/xml2.
I left the code for allocation redirection in place, but it's only
built/used if you #define USE_LIBXMLCONTEXT. This is because I found it
useful to corral libxml's allocations in a palloc context when hunting
for libxml memory leaks, and we're surely going to have more of those
in the future with this type of approach. But we don't want it turned on
in a normal build because it breaks exactly what we need to fix.
I have not re-indented most of the code sections that are now wrapped
by PG_TRY(); that's for ease of review. pgindent will fix it.
This is a pre-existing bug in 8.3, but I don't dare back-patch this change
until it's gotten a reasonable amount of field testing.
| |
errors when tables are concurrently dropped. To do this we must take lock
on each relation before we check its privileges. The old code was trying
to do that the other way around, which is a bit pointless when there are lots
of other commands that lock relations before checking privileges. I did keep
it checking each relation's privilege before locking the next relation, which
is a detail that ALTER TABLE isn't too picky about.
| |
ability to lock relations as they scan pg_inherits, and to ignore any
relations that have disappeared by the time we get lock on them. This
makes uses of these functions safe against concurrent DROP operations
on child tables: we will effectively ignore any just-dropped child,
rather than possibly throwing an error as in recent bug report from
Thomas Johansson (and similar past complaints). The behavior should
not change otherwise, since the code was acquiring those same locks
anyway, just a little bit later.
An exception is LockTableCommand(), which is still behaving unsafely;
but that seems to require some more discussion before we change it.
| |
find_inheritance_children() and find_all_inheritors(). I got annoyed that
these are buried inside the planner but mostly used elsewhere. So, create
a new file catalog/pg_inherits.c and put them there, along with a couple
of other functions that search pg_inherits.
The code that modifies pg_inherits is (still) in tablecmds.c --- it's
kind of entangled with unrelated code that modifies pg_depend and other
stuff, so pulling it out seemed like a bigger change than I wanted to make
right now. But this file provides a natural home for it if anyone ever
gets around to that.
This commit just moves code around; it doesn't change anything, except
I succumbed to the temptation to make a couple of trivial optimizations
in typeInheritsFrom().
| |
joins a bit better, ie, understand the differing cost functions for matched
and unmatched outer tuples. There is more that could be done in cost_hashjoin
but this already helps a great deal. Per discussions with Robert Haas.
| |
linkage on Win32.
Tested by Hiroshi Saito
| |
a toast table to be built, even if the sum-of-column-widths calculation
indicates one isn't needed. This is needed by pg_migrator because if the
old table has a toast table, we have to migrate over the toast table since
it might contain some live data, even though subsequent column drops could
mean that no recently-added rows could require toasting.
| |
a backend has done exit(0) or exit(1) without having disengaged itself
from shared memory. We are at risk for this whenever third-party code is
loaded into a backend, since such code might not know it's supposed to go
through proc_exit() instead. Also, it is reported that under Windows
there are ways to externally kill a process that cause the status code
returned to the postmaster to be indistinguishable from a voluntary exit
(thank you, Microsoft). If this does happen then the system is probably
hosed --- for instance, the dead session might still be holding locks.
So the best recovery method is to treat this like a backend crash.
The dead man switch is armed for a particular child process when it
acquires a regular PGPROC, and disarmed when the PGPROC is released;
these should be the first and last touches of shared memory resources
in a backend, or close enough anyway. This choice means there is no
coverage for auxiliary processes, but I doubt we need that, since they
shouldn't be executing any user-provided code anyway.
This patch also improves the management of the EXEC_BACKEND
ShmemBackendArray array a bit, by reducing search costs.
Although this problem is of long standing, the lack of field complaints
seems to mean it's not critical enough to risk back-patching; at least
not till we get some more testing of this mechanism.
| |
PlaceHolderVar nodes in join quals appearing in or below the lowest
outer join that could null the subquery being pulled up. This improves
the planner's ability to recognize constant join quals, and probably
helps with detection of common sort keys (equivalence classes) as well.
| |
update.
Per discussion.
| |
fact that this is breaking the MSVC build, it's probably not really a good
idea to expand the dependencies of gram.h any further than the core parser;
for instance the value of SCONST might depend on which bison version you'd
built with. Better to expose an additional call point in parser.c, so
move what I had put into pl_funcs.c into parser.c. Also PGDLLIMPORT'ify
the reference to standard_conforming_strings, per buildfarm results.
| |
Stefan Kaltenbrunner. The most reasonable behavior (at least for the near
term) seems to be to ignore the PlaceHolderVar and examine its argument
instead. In support of this, change the API of pull_var_clause() to allow
callers to request recursion into PlaceHolderVars. Currently
estimate_num_groups() is the only customer for that behavior, but where
there's one there may be others.
| |
constants through full joins, as in
select * from tenk1 a full join tenk1 b using (unique1)
where unique1 = 42;
which should generate a fairly cheap plan where we apply the constraint
unique1 = 42 in each relation scan. This had been broken by my patch of
2008-06-27, which is now reverted in favor of a more invasive but hopefully
less incorrect approach. That patch was meant to prevent incorrect extraction
of OR'd indexclauses from OR conditions above an outer join. To do that
correctly we need more information than the outerjoin_delay flag can provide,
so add a nullable_relids field to RestrictInfo that records exactly which
relations are nulled by outer joins that are underneath a particular qual
clause. A side benefit is that we can make the test in create_or_index_quals
more specific: it is now smart enough to extract an OR'd indexclause into the
outer side of an outer join, even though it must not do so in the inner side.
The old coding couldn't distinguish these cases so it could not do either.
| |
how this ought to behave for multi-dimensional arrays. Per discussion,
not having it at all seems better than having it with what might prove
to be the wrong behavior. We can always add it later when we have consensus
on the correct behavior.
| |
map_sql_value_to_xml_value() instead of directly through the data type output
function. This is per SQL standard, and consistent with XMLELEMENT().
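A usage illustration (values invented); the attribute value below now goes
through the same XML output mapping as element content:

    SELECT xmlelement(name item,
                      xmlattributes(current_date AS created),
                      'payload');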
| |
already did that on Windows, but it's needed on other platforms too when
LC_CTYPE=C. With other locales, we enforce (or trust) that the codeset of
the locale matches the server encoding so we don't need to bind it
explicitly. It should do no harm in that case either, but I don't have
full faith in the PG encoding -> OS codeset mapping table yet. Per recent
discussion on pgsql-hackers.
| |
the checkpoint in immediate or lazy mode. This is to address complaints
that pg_start_backup() takes a long time even when there's no need to minimize
its I/O consumption.
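In SQL terms (label invented), the second argument selects the mode; true
requests an immediate checkpoint rather than a spread-out one:

    SELECT pg_start_backup('nightly', true);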
| |
LC_COLLATE and LC_CTYPE, per discussion on pgsql-hackers.
| |
alias for array_length(v,1). The efficiency gain here is doubtless
negligible --- what I'm interested in is making sure that if we have
second thoughts about the definition, we will not have to force a
post-beta initdb to change the implementation.
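For reference, array_length takes the array and a dimension number:

    SELECT array_length(ARRAY[1,2,3], 1);          -- 3
    SELECT array_length(ARRAY[[1,2],[3,4]], 2);    -- 2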
| |
are individually labeled, rather than just grouped under an "InitPlan"
or "SubPlan" heading. This in turn makes it possible for decompilation of
a subplan reference to usefully identify which subplan it's referencing.
I also made InitPlans identify which parameter symbol(s) they compute,
so that references to those parameters elsewhere in the plan tree can
be connected to the initplan that will be executed. Per a gripe from
Robert Haas about EXPLAIN output of a WITH query being inadequate,
plus some longstanding pet peeves of my own.
| |
are using our own ports of getopt or getopt_long, those will define
the variable for themselves; and if not, we don't need these, because
we never touch the variable anyway.
| |
provides optreset. Current mastodon results prove that in fact it
does not; it was only because getopt.c defined the variable anyway
that things failed to fall over.
| |
probe for opterr (exactly like the one for optreset) and have getopt.c
define the variables only if configure doesn't find them in libc.
| |
of adding optional namespace and action fields to DefElem. Having three
node types that do essentially the same thing bloats the code and leads
to errors of confusion, such as in yesterday's bug report from Khee Chin.
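For reference, a sketch of the consolidated node roughly as defined in
parsenodes.h (abbreviated; comments paraphrased, so treat as illustrative):

    typedef struct DefElem
    {
        NodeTag         type;
        char           *defnamespace;   /* NULL if option name is unqualified */
        char           *defname;
        Node           *arg;            /* typically a Value or TypeName node */
        DefElemAction   defaction;      /* unspecified action, or SET/ADD/DROP */
    } DefElem;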
| |
when we are waiting for old snapshots to go away during a concurrent index
build. In particular, this rule lets us avoid waiting for
idle-in-transaction sessions.
This logic could be improved further if we had some way to wake up when
the session we are currently waiting for goes idle-in-transaction. However
that would be a significantly more complex/invasive patch, so it'll have to
wait for some other day.
Simon Riggs, with some improvements by Tom.
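The waiting described here applies only to concurrent builds, i.e. commands of
this form (names invented):

    CREATE INDEX CONCURRENTLY idx_orders_customer ON orders (customer_id);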
| |
To implement this without almost duplicating the reloption table, treat
relopt_kind as a bitmask instead of an integer value. This decreases the
range of allowed values, but it's not clear that there's a need for that many
values anyway.
This patch also makes heap_reloptions explicitly a no-op for relation kinds
other than heap and TOAST tables.
Patch by ITAGAKI Takahiro with minor edits from me. (In particular I removed
the bit about adding relation kind to an error message, which I intend to
commit separately.)
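A minimal sketch of how bitmask kinds combine when registering an option,
assuming the 8.4-era reloptions API (the function and option names are
invented):

    #include "access/reloptions.h"

    static relopt_kind my_kind;

    void
    register_my_reloptions(void)
    {
        /* reserve a new kind: one bit in the relopt_kind bitmask */
        my_kind = add_reloption_kind();

        /* since kinds are bits, one option can apply to several kinds at once */
        add_bool_reloption(my_kind | RELOPT_KIND_HEAP,
                           "my_option",
                           "illustrative boolean reloption",
                           false);
    }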
| |
for simple Var targetlist entries all the time, even when there are other
entries that are not simple Vars. Also, ensure that we prefetch attributes
(with slot_getsomeattrs) for all Vars in the targetlist, even those buried
within expressions. In combination these changes seem to significantly
reduce the runtime for cases where tlists are mostly but not exclusively
Vars. Per my proposal of yesterday.
| |
conversion functions. This allows transaction rollback to revert to a
previous client_encoding setting without doing fresh catalog lookups.
I believe that this explains and fixes the recent report of "failed to commit
client_encoding" failures.
This bug is present in 8.3.x, but it doesn't seem prudent to back-patch
the fix, at least not till it's had some time for field testing in HEAD.
In passing, remove SetDefaultClientEncoding(), which was used nowhere.
| |
temp relations; this is no more expensive than before, now that we have
pg_class.relistemp. Insert tests into bufmgr.c to prevent attempting
to fetch pages from nonlocal temp relations. This provides a low-level
defense against bugs-of-omission allowing temp pages to be loaded into shared
buffers, as in the contrib/pgstattuple problem reported by Stuart Bishop.
While at it, tweak a bunch of places to use new relcache tests (instead of
expensive probes into pg_namespace) to detect local or nonlocal temp tables.
| |
relations (including a temp table's indexes and toast table/index), and
false for normal relations. For ease of checking, this commit just adds
the column and fills it correctly --- revising the relation access machinery
to use it will come separately.
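Once filled in, the column can be queried directly, for example:

    -- list relations flagged as temp (8.4-era catalog column)
    SELECT relname, relkind FROM pg_class WHERE relistemp;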
| |
TupleTableSlots. We have functions for retrieving a minimal tuple from a slot
after storing a regular tuple in it, or vice versa; but these were implemented
by converting the internal storage from one format to the other. The problem
with that is it invalidates any pass-by-reference Datums that were already
fetched from the slot, since they'll be pointing into the just-freed version
of the tuple. The known problem cases involve fetching both a whole-row
variable and a pass-by-reference value from a slot that is fed from a
tuplestore or tuplesort object. The added regression tests illustrate some
simple cases, but there may be other failure scenarios traceable to the same
bug. Note that the added tests probably only fail on unpatched code if it's
built with --enable-cassert; otherwise the bug leads to fetching from freed
memory, which will not have been overwritten without additional conditions.
Fix by allowing a slot to contain both formats simultaneously, which turns out
not to complicate the logic much at all; if anything, it seems less contorted
than before.
Back-patch to 8.2, where minimal tuples were introduced.
| |
mode while callers hold pointers to in-memory tuples. I reported this for
the case of nodeWindowAgg's primary scan tuple, but inspection of the code
shows that all of the calls in nodeWindowAgg and nodeCtescan are at risk.
For the moment, fix it with a rather brute-force approach of copying
whenever one of the at-risk callers requests a tuple. Later we might
think of some sort of reference-count approach to reduce tuple copying.
| |
In the backend, I changed only a handful of exemplary or important-looking
instances to make use of the plural support; there is probably more work
there. For the rest of the source, this should cover all relevant cases.
| |
"physical tlist" optimization on the outer relation (ie, force a projection
step to occur in its scan). This avoids storing useless column values when
the outer relation's tuples are written to temporary batch files.
Modified version of a patch by Michael Henderson and Ramon Lawrence.
| |
method to pass extra data to the consistent() and comparePartial() methods.
This is the core infrastructure needed to support the soon-to-appear
contrib/btree_gin module. The APIs are still upward compatible with the
definitions used in 8.3 and before, although *not* with the previous 8.4devel
function definitions.
catversion bump for changes in pg_proc entries (although these are just
cosmetic, since GIN doesn't actually look at the function signature before
calling it...)
Teodor Sigaev and Oleg Bartunov