return "dynamic loading error";

reasons I outlined in pghackers a few days ago.
Also, undo someone's overly optimistic decision to reduce tuple state
checks from if (...) elog() to Asserts. If I trusted this code more,
I might think it was a good idea to disable these checks in production
installations. But I don't.
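
A minimal sketch of the distinction at stake here, assuming a backend-style check (the condition and message are illustrative, not the ones in the patched code): Assert() compiles to nothing unless the server was built with assertions enabled, while an explicit test plus elog() runs in every build.

#include "postgres.h"

static void
check_tuple_state(bool state_ok)
{
    /* Compiled out unless built with --enable-cassert, so a production
     * installation never runs this test. */
    Assert(state_ok);

    /* Always executed, even in a production build. */
    if (!state_ok)
        elog(ERROR, "tuple state check failed");
}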

escapes --- they aren't simply quoted characters. Problem noted by
Antti Salmela. Also fix problem with incorrect handling of multibyte
characters when followed by a quantifier.

In particular, there was a mathematical tie between the two possible
nestloop-with-materialized-inner-scan plans for a join (ie, we computed
the same cost with either input on the inside), resulting in a roundoff
error driven choice, if the relations were both small enough to fit in
sort_mem. Add a small cost factor to ensure we prefer materializing the
smaller input. This changes several regression test plans, but with any
luck we will now have more stability across platforms.
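
A toy illustration of the tie (not the planner's real cost formulas; the numbers and the charge are invented): when rescanning a fully in-memory materialized inner is charged only per-tuple CPU, the cost expression is symmetric in the two inputs, so either join order prices out identically and roundoff decides. Charging a token amount per materialized tuple makes the plan that materializes the smaller input strictly cheaper.

#include <stdio.h>

#define CPU_PER_TUPLE   0.01
#define PAGE_COST       1.0
#define MATERIAL_CHARGE 0.001   /* illustrative tie-breaking charge */

/* Toy cost for a nestloop whose inner side is materialized in memory:
 * scan both inputs once, then pay per-tuple CPU for every row pair.
 * Without the extra charge this is symmetric in the two inputs. */
static double
nestloop_cost(double outer_rows, double outer_pages,
              double inner_rows, double inner_pages,
              double material_charge)
{
    return PAGE_COST * (outer_pages + inner_pages)
         + material_charge * inner_rows              /* tie breaker */
         + CPU_PER_TUPLE * outer_rows * inner_rows;
}

int
main(void)
{
    /* Two small relations: A (1000 rows, 10 pages), B (100 rows, 1 page). */
    printf("A outer, no charge: %.3f\n", nestloop_cost(1000, 10, 100, 1, 0));
    printf("B outer, no charge: %.3f\n", nestloop_cost(100, 1, 1000, 10, 0));

    /* With the per-tuple materialization charge, putting the smaller
     * input (B) on the inside is now strictly cheaper. */
    printf("A outer, charged:   %.3f\n", nestloop_cost(1000, 10, 100, 1, MATERIAL_CHARGE));
    printf("B outer, charged:   %.3f\n", nestloop_cost(100, 1, 1000, 10, MATERIAL_CHARGE));
    return 0;
}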

a relation's number of blocks, rather than the possibly-obsolete value
in pg_class.relpages. Scale the value in pg_class.reltuples correspondingly
to arrive at a hopefully more accurate number of rows. When pg_class
contains 0/0, estimate a tuple width from the column datatypes and divide
that into current file size to estimate number of rows. This improved
methodology allows us to jettison the ancient hacks that put bogus default
values into pg_class when a table is first created. Also, per a suggestion
from Simon, make VACUUM (but not VACUUM FULL or ANALYZE) adjust the value
it puts into pg_class.reltuples to try to represent the mean tuple density
instead of the minimal density that actually prevails just after VACUUM.
These changes alter the plans selected for certain regression tests, so
update the expected files accordingly. (I removed join_1.out because
it's not clear if it still applies; we can add back any variant versions
as they are shown to be needed.)
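
A worked example of the scaling described above (the numbers are invented; this is just the arithmetic, not the planner code): if the last VACUUM left relpages = 100 and reltuples = 10000 but the file has since grown to 250 blocks, assume the tuple density is unchanged and scale the row estimate accordingly.

#include <stdio.h>

int
main(void)
{
    /* Values recorded by the last VACUUM/ANALYZE (possibly stale). */
    double relpages  = 100.0;     /* pg_class.relpages  */
    double reltuples = 10000.0;   /* pg_class.reltuples */

    /* Current physical size, read directly from the relation. */
    double cur_blocks = 250.0;

    /* Scale the stored tuple count by the growth in block count,
     * assuming tuple density stayed roughly constant. */
    double density  = reltuples / relpages;   /* 100 tuples per page */
    double est_rows = density * cur_blocks;   /* 25000 rows          */

    printf("estimated rows = %.0f\n", est_rows);
    return 0;
}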

prevents problems when the DECLARE is in a portal and is executed
repeatedly, as is possible in v3 protocol. Per analysis by Oliver
Jowett, though I didn't use his patch exactly.

who use it scan the relevant source files for their own catalog. It
creates a bit of duplicate work for translators, but it gets the job done
for now.

by Troels Arvin, Simon Riggs, Elein Mustain
Make spelling of SQL standard names uniform.

8.4.1). This corrects some curious regex bugs, though not the greediness
issue I was hoping to find a solution for :-(

error conditions during regexp compile, but not during regexp execution;
any sort of "can't happen" errors would be treated as no-match instead
of being reported as they should be. Noticed while trying to duplicate
a reported Tcl bug.
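
The same pitfall is easy to demonstrate with the POSIX regex API (used here purely as an analogue; the backend's own regex entry points differ): regexec() reports REG_NOMATCH for a clean miss, but any other nonzero return is a genuine execution error and has to be surfaced rather than folded into "no match".

#include <regex.h>
#include <stdio.h>

int
main(void)
{
    regex_t re;
    char    errbuf[256];
    int     rc;

    if ((rc = regcomp(&re, "foo(bar)?", REG_EXTENDED)) != 0)
    {
        regerror(rc, &re, errbuf, sizeof(errbuf));
        fprintf(stderr, "compile failed: %s\n", errbuf);
        return 1;
    }

    rc = regexec(&re, "foobaz", 0, NULL, 0);
    if (rc == 0)
        printf("match\n");
    else if (rc == REG_NOMATCH)
        printf("no match\n");
    else
    {
        /* Anything else is a real execution error and must be reported,
         * not silently treated as a failed match. */
        regerror(rc, &re, errbuf, sizeof(errbuf));
        fprintf(stderr, "regexec failed: %s\n", errbuf);
    }
    regfree(&re);
    return 0;
}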

to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.

a home-brewed combination of assertions that boiled down to the same
thing.

fill factor has been exceeded. We usually run with ffactor == 1, but
the way the test was coded, it wouldn't split a bucket until the actual
fill factor reached 2.0, because of use of integer division. Change
from > to >= so that it will split more aggressively when the table
starts to get full.
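
A small standalone demonstration of the integer-division effect (the sizes are illustrative, not the hash metapage's actual fields): with 4 buckets and a target fill factor of 1, the old strict comparison does not fire until the 8th tuple, an actual fill factor of 2.0, while >= splits as soon as the average bucket is full.

#include <stdio.h>

int
main(void)
{
    unsigned nbuckets = 4;   /* illustrative */
    unsigned ffactor  = 1;   /* target tuples per bucket */
    unsigned ntuples;

    for (ntuples = 1; ntuples <= 8; ntuples++)
    {
        /* Old test: integer division means 7 tuples in 4 buckets gives
         * 7/4 == 1, which is not > 1, so no split until fill hits 2.0. */
        int old_split = (ntuples / nbuckets) > ffactor;

        /* New test: split as soon as the average bucket is full. */
        int new_split = (ntuples / nbuckets) >= ffactor;

        printf("ntuples=%u  old=%s  new=%s\n", ntuples,
               old_split ? "split" : "-", new_split ? "split" : "-");
    }
    return 0;
}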

few cycles during transaction exit. A typical session probably
wouldn't have as many as half a dozen portals open at once, so the
original value of 64 seems far larger than needed.

the year from a BC date, but failed to make the same fix in
date_part(timestamptz).

is nothing to do, which is most of the time. This is another simple
improvement to cut subtransaction entry/exit overhead.

no need for it to be nearly as big as the global hash table, and since
it's not in shared memory it can grow if it does need to be bigger.
By reducing the size, we speed up hash_seq_search(), which saves a
significant fraction of subtransaction entry/exit overhead.

to the original List; per report from Sebastian Böck. I think this is
the last such bug --- I examined every lcons() call in the backend and
the rest seem OK --- but it's nervous-making that we're still finding
'em so many months after the List rewrite went in.

collector until the transaction commits. Per recent discussion, this
should avoid confusing autovacuum when an updating transaction runs for
a long time.

free operations in client_cert_cb --- openssl will also attempt to free
these structures, resulting in core dumps.

this is to avoid scenarios where incoming backends find no live copies
of a database's row because the only live copy is in an as-yet-unwritten
shared buffer, which they can't see. Also, use FlushRelationBuffers()
for forcing out pg_database, instead of the much more expensive BufferSync().
There's no need to write out pages belonging to other relations.

avoid repalloc'ing twice when once is sufficient.

Rather than using ReadBuffer() to increment the reference count on an
already-pinned buffer, we should use IncrBufferRefCount() as it is
faster and does not require acquiring the BufMgrLock.
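
A sketch of the pattern in backend terms (the helper is hypothetical; ReadBuffer() and IncrBufferRefCount() are the real bufmgr calls): when the buffer is already pinned, taking another reference is just a local refcount bump.

#include "postgres.h"
#include "storage/bufmgr.h"

/* Hypothetical helper: the caller already holds a pin on "buf" and wants
 * an extra reference that will be released independently. */
static Buffer
duplicate_pin(Buffer buf)
{
    /*
     * ReadBuffer() on the same block would hand back this buffer too,
     * but only after a hash-table lookup made while holding BufMgrLock.
     * The buffer is already pinned, so just bump the local refcount.
     */
    IncrBufferRefCount(buf);
    return buf;
}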

been defined. Patch from Gavin Sherry, editorializing by Neil Conway.

even uglier than it was already :-(. Also, on Windows only, use temporary
shared memory segments instead of ordinary files to pass over critical
variable values from postmaster to child processes. Magnus Hagander

more than 65K columns, or when the created table has more than 65K columns
due to adding inherited columns from parent relations. Fix a similar
crash when processing SELECT queries with more than 65K target list
entries. In all three cases we would eventually detect the error and
elog, but the check was being made too late.
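
A sketch of the kind of up-front validation involved, using a hypothetical helper (MaxTupleAttributeNumber and ERRCODE_TOO_MANY_COLUMNS are real backend symbols, but the placement of the check and the header that declares the constant vary by version):

#include "postgres.h"
#include "access/htup.h"        /* MaxTupleAttributeNumber; location varies */
#include "nodes/pg_list.h"

/* Hypothetical early check on a SELECT target list, made before any
 * processing that assumes the count fits in a 16-bit attribute number. */
static void
check_targetlist_length(List *tlist)
{
    if (list_length(tlist) > MaxTupleAttributeNumber)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_COLUMNS),
                 errmsg("target lists can have at most %d entries",
                        MaxTupleAttributeNumber)));
}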

Magnus Hagander

We don't really want to start a new SPI connection, just keep using the old
one; otherwise we have memory management problems as illustrated by
John Kennedy's bug report of today. This requires a bit of a hack to
ensure the SPI stack state is properly restored, but then again what we
were doing before was a hack too, strictly speaking. Add a regression
test to cover this case.

plain SUSET instead. Also delay processing of options received in
client connection request until after we know if the user is a superuser,
so that SUSET values can be set that way by legitimate superusers.
Per recent discussion.

buffer is valid, as ReadBuffer() will elog on error. Most of the call
sites of ReadBuffer() got this right, but this patch fixes those call
sites that did not.

> pg specific, like "PostgreSQL.1". I have not done this since a new compile
> would not detect a running old beta. But now would be the time (or never).
Zeugswetter Andreas

since this path will lead to postmaster exit anyway...)

a global variable to control building indexes.

selectivity estimates, per recent discussion.

shared memory segment ID. If we can't access the existing shmem segment,
it must not be relevant to our data directory. If we can access it,
then attach to it and check for an actual match to the data directory.
This should avoid some cases of failure-to-restart-after-boot without
introducing any significant risk of failing to detect a still-running
old backend.
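
A rough sketch of the attach-and-inspect idea using the System V calls (the header layout and the comparison are hypothetical stand-ins; the real postmaster interlock records and checks different fields):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical stand-in for the header an earlier postmaster would have
 * written at the start of its segment; the real structure differs. */
typedef struct
{
    char        data_directory[1024];
} OldSegmentHeader;

static bool
segment_matches_datadir(int shm_id, const char *datadir)
{
    OldSegmentHeader *hdr;
    bool        match;

    hdr = (OldSegmentHeader *) shmat(shm_id, NULL, SHM_RDONLY);
    if (hdr == (OldSegmentHeader *) -1)
        return false;           /* can't access it, so it can't be ours */

    match = (strncmp(hdr->data_directory, datadir,
                     sizeof(hdr->data_directory)) == 0);
    shmdt(hdr);
    return match;
}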

estimates when combining the estimates for a range query. As pointed out
by Miquel van Smoorenburg, the existing check for an impossible combined
result would quite possibly fail to detect one default and one non-default
input. It seems better to use the default range query estimate in such
cases. To do so, add a check for an estimate of exactly DEFAULT_INEQ_SEL.
This is a bit ugly because it introduces additional coupling between
clauselist_selectivity and scalarltsel/scalargtsel, but it's not like
there wasn't plenty already...
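
A standalone sketch of the combination rule and the new escape hatch (the constants mirror the values in selfuncs.h, but the function here is an illustration, not clauselist_selectivity itself):

#include <stdio.h>

#define DEFAULT_INEQ_SEL        0.3333333333333333
#define DEFAULT_RANGE_INEQ_SEL  0.005

/* Combine the selectivities of "x > lo" and "x < hi" into one estimate
 * for the range clause: s = hisel + losel - 1. */
static double
range_selectivity(double losel, double hisel)
{
    /*
     * If either side is exactly DEFAULT_INEQ_SEL, that operator had no
     * statistics to work with, so the sum says nothing useful; fall back
     * to the default range estimate instead of trusting the arithmetic.
     */
    if (losel == DEFAULT_INEQ_SEL || hisel == DEFAULT_INEQ_SEL)
        return DEFAULT_RANGE_INEQ_SEL;
    return hisel + losel - 1.0;
}

int
main(void)
{
    /* One defaulted input plus one real estimate: 0.8 + 0.3333 - 1 = 0.1333
     * looks plausible but is meaningless, so the default wins. */
    printf("%g\n", range_selectivity(DEFAULT_INEQ_SEL, 0.8));

    /* Two informed estimates combine normally: 0.2 + 0.9 - 1 = 0.1. */
    printf("%g\n", range_selectivity(0.9, 0.2));
    return 0;
}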

working as intended --- for some reason, FROM a.b.c was getting
parsed as if it were a function name and not a qualified name.
I think there must be a bug in bison, because it should have
complained that the grammar was ambiguous. Anyway, fix it along
the same lines previously used for func_name vs columnref, and get
rid of the right-recursion in attrs that seems to have confused
bison.

shifting left by full word width gives zero. Per bug report from
Tyson Thomson.
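
A small example of the trap (the helper is illustrative): in C, shifting a 32-bit value left by 32 is undefined behavior, and on x86 the hardware masks the shift count, so the "shift by full width yields zero" assumption silently produces the wrong mask; the boundary case has to be handled explicitly.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Build a mask of the n low-order bits, for 0 <= n <= 32. */
static uint32_t
low_order_mask(unsigned n)
{
    /*
     * ((uint32_t) 1 << 32) is undefined; on x86 the shift count is taken
     * mod 32, so the "obvious" expression would yield 1, not 0.  Handle
     * the full-width case explicitly instead of assuming the shift gives
     * zero.
     */
    if (n >= 32)
        return ~(uint32_t) 0;
    return ((uint32_t) 1 << n) - 1;
}

int
main(void)
{
    printf("%08" PRIx32 "\n", low_order_mask(8));    /* 000000ff */
    printf("%08" PRIx32 "\n", low_order_mask(32));   /* ffffffff */
    return 0;
}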
|