| Commit message | Author | Age |
...
|
These were missed in commit a213f1ee6c5a1bbe1f074ca201975e76ad2ed50c,
which removed that function.
|
SP-GiST is comparable to GiST in flexibility, but supports non-balanced
partitioned search structures rather than balanced trees. As described at
PGCon 2011, this new indexing structure can beat GiST in both index build
time and query speed for search problems that it is well matched to.
There are a number of areas that could still use improvement, but at this
point the code seems committable.
Teodor Sigaev and Oleg Bartunov, with considerable revisions by Tom Lane
|
Heikki Linnakangas had the idea of rearranging GetSnapshotData to
avoid checking for sub-XIDs when no top-level XID is present. This
patch does that plus a bit of further, related rearrangement.
Benchmarking shows a significant improvement on unlogged tables at
higher concurrency levels, and a mostly indifferent result on permanent
tables (which are presumably bottlenecked elsewhere). Most of the
benefit seems to come from using the new NormalTransactionIdPrecedes()
macro rather than the function call TransactionIdPrecedes().
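
A minimal sketch of the idea behind such a macro, assuming the usual modulo-2^32 ordering of normal XIDs; the name and the assertion scaffolding in the real transam headers may differ:

    #include <stdint.h>

    typedef uint32_t TransactionId;

    /*
     * Illustrative only: compare two "normal" XIDs inline using wraparound
     * (modulo-2^32) arithmetic, so a hot path such as GetSnapshotData can
     * skip the overhead of the TransactionIdPrecedes() function call.
     */
    #define NormalTransactionIdPrecedesSketch(id1, id2) \
        ((int32_t) ((id1) - (id2)) < 0)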
|
Valid values are pre-data, data and post-data. The option can be
given more than once. --schema-only is equivalent to
--section=pre-data --section=post-data. --data-only is equivalent
to --section=data.
Andrew Dunstan, reviewed by Joachim Wieland and Josh Berkus.
|
This works the same as include, except that an error is not thrown
if the file is missing. Instead the fact that it's missing is
logged.
Greg Smith, reviewed by Euler Taveira de Oliveira.
|
If the referent of a name changes while we're waiting for the lock,
we must recheck permissions. We also now check the relkind before
locking, since it's easy to do that along the way.
Patch by me; review by Noah Misch.
|
Previously, renaming a table, sequence, view, index, foreign table,
column, or trigger checked permissions before locking the object, which
meant that if permissions were revoked during the lock wait, we would
still allow the operation. Similarly, if the original object was dropped
and a new one with the same name was created, the operation would be
allowed if we had permissions on the old object; the permissions on the
new object didn't matter. All this is now fixed.
Along the way, attempting to rename a trigger on a foreign table now gives
the same error message as trying to create one there in the first place
(i.e. that it's not a table or view) rather than simply stating that no
trigger by that name exists.
Patch by me; review by Noah Misch.
|
Fixes an oversight in commit fc6d1006bda783cc002c61a5f072905849dbde4b.
Noted by Tom Lane.
|
Lots of repetitive code was moved into new functions
PLy_spi_subtransaction_{begin,commit,abort}.
Jan Urbański
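
A hedged sketch of what such helpers typically look like, following the standard PL subtransaction pattern; the real bodies in plpy_spi.c may differ in detail (for example, in how the SPI connection is restored):

    #include "postgres.h"
    #include "access/xact.h"
    #include "utils/memutils.h"
    #include "utils/resowner.h"

    /* Enter a subtransaction, but keep allocating in the caller's context. */
    static void
    PLy_spi_subtransaction_begin(MemoryContext oldcontext, ResourceOwner oldowner)
    {
        BeginInternalSubTransaction(NULL);
        MemoryContextSwitchTo(oldcontext);
    }

    /* Commit the subtransaction and restore the caller's state. */
    static void
    PLy_spi_subtransaction_commit(MemoryContext oldcontext, ResourceOwner oldowner)
    {
        ReleaseCurrentSubTransaction();
        MemoryContextSwitchTo(oldcontext);
        CurrentResourceOwner = oldowner;
    }

    /* Abort the subtransaction and restore the caller's state. */
    static void
    PLy_spi_subtransaction_abort(MemoryContext oldcontext, ResourceOwner oldowner)
    {
        RollbackAndReleaseCurrentSubTransaction();
        MemoryContextSwitchTo(oldcontext);
        CurrentResourceOwner = oldowner;
    }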
|
Andrew Dunstan, reviewed by Josh Berkus, Robert Haas and Peter Geoghegan.
This allows dumping of a table definition but not its data, on a per table basis.
Table name patterns are supported just as for --exclude-table.
|
Yeb Havinga, reviewed by Kevin Grittner, with small changes by me.
|
Removing this bit from xl_info allows us to restore the old limit of four
(not three) separate pages touched by a WAL record, which is needed for the
upcoming SP-GiST feature, and will likely be useful elsewhere in future.
When we implemented XLR_BKP_REMOVABLE in 2007, we had to do it like that
because no special WAL-visible action was taken when starting a backup.
However, now we force a segment switch when starting a backup, so a
compressing WAL archiver (such as pglesslog) that uses the state shown in
the current page header will not be fooled as to removability of backup
blocks. The only downside is that the archiver will not return to
compressing mode for up to one WAL page after the backup is over, which is
a small price to pay for getting back the extra xl_info bit. In any case
the archiver could look for XLOG_BACKUP_END records if it thought it was
worth the trouble to do so.
Bump XLOG_PAGE_MAGIC since this is effectively a change in WAL format.
|
I forgot to change the functions to use the PG_GETARG_INET_PP() macro,
when I changed DatumGetInetP() to unpack the datum, like Datum*P macros
usually do. Also, I screwed up the definition of the PG_GETARG_INET_PP()
macro, and didn't notice because it wasn't used.
This fixes the memory leak when sorting inet values, as reported
by Jochen Erwied and debugged by Andres Freund. Backpatch to 8.3, like
the previous patch that broke it.
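
The corrected definitions are presumably along these lines (a sketch following the usual Datum*PP convention of detoasting without unpacking short-header values):

    #include "fmgr.h"
    #include "utils/inet.h"

    /* "PP" variants return a possibly-packed (short-header) datum. */
    #define DatumGetInetPP(X)     ((inet *) PG_DETOAST_DATUM_PACKED(X))
    #define PG_GETARG_INET_PP(n)  DatumGetInetPP(PG_GETARG_DATUM(n))

Using the packed form in the comparison routines avoids allocating a detoasted copy of every value, which is what made the leak during sorting possible.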
|
Remove some dead code, conditionally declare some items or call
some code, and fix one or two declarations.
|
Original patch by Lars Kanis, reviewed by Nishiyama Tomoaki and tweaked some by me.
This compiler, or at least the latest version of it, is currently broken, and
only passes the regression tests if built with -O0.
|
we don't reach consistency before replaying all of the WAL. Rename the
variable to reachedConsistency, to make its intention clearer.
In master, that was an active bug because of the recent patch to
immediately PANIC if a reference to a missing page is found in WAL after
reaching consistency, as Tom Lane's test case demonstrated. In 9.1 and 9.0,
the only consequence was a misleading "consistent recovery state reached at
%X/%X" message in the log at the beginning of crash recovery (the database
is not consistent at that point yet). In 8.4, the log message was not
printed in crash recovery, even though there was a similar
reachedMinRecoveryPoint local variable that was also set early. So,
backpatch to 9.1 and 9.0.
|
lost. The only way we detect that at the moment is when write() fails when
we try to write to the socket.
Florian Pflug with small changes by me, reviewed by Greg Jaskiewicz.
|
Thomas Munro
|
Make sure all calls are protected by HAVE_READLINK, and get the buffer
overflow tests right. Be a bit more paranoid about string length in
_tarWriteHeader(), too.
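
A sketch of the defensive pattern involved, using only portable POSIX calls: guard the call with HAVE_READLINK, remember that readlink() does not NUL-terminate, and treat a result that fills the buffer as possible truncation (the real backup code is organized differently):

    #include <errno.h>
    #include <unistd.h>

    #ifdef HAVE_READLINK
    /* Returns 0 on success, -1 on error (errno set), -2 on truncation. */
    static int
    read_link_checked(const char *path, char *buf, size_t buflen)
    {
        ssize_t     len = readlink(path, buf, buflen - 1);

        if (len < 0)
            return -1;              /* genuine failure; errno is set */
        if ((size_t) len >= buflen - 1)
            return -2;              /* target too long; note errno is NOT set */
        buf[len] = '\0';            /* readlink() does not terminate the string */
        return 0;
    }
    #endif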
|
This situation won't set errno, so using %m will give an incorrect
error message.
|
We don't have any such platforms now, but might in the future.
Also, detect cases when a tablespace symlink points to a path that
is longer than we can handle, and give a warning.
|
Instead, add a function pg_tablespace_location(oid) used to return
the same information, and do this by reading the symbolic link.
Doing it this way makes it possible to relocate a tablespace when the
database is down by simply changing the symbolic link.
|
This patch creates an API whereby a btree index opclass can optionally
provide non-SQL-callable support functions for sorting. In the initial
patch, we only use this to provide a directly-callable comparator function,
which can be invoked with a bit less overhead than the traditional
SQL-callable comparator. While that should be of value in itself, the real
reason for doing this is to provide a datatype-extensible framework for
more aggressive optimizations, as in Peter Geoghegan's recent work.
Robert Haas and Tom Lane
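
A sketch of what an opclass-provided sort support function looks like under this API; the names mimic the integer opclass, and the catalog wiring is omitted:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/sortsupport.h"

    /* Bare C comparator, callable without going through the fmgr layer. */
    static int
    btint4fastcmp(Datum x, Datum y, SortSupport ssup)
    {
        int32       a = DatumGetInt32(x);
        int32       b = DatumGetInt32(y);

        if (a > b)
            return 1;
        else if (a == b)
            return 0;
        else
            return -1;
    }

    /* SQL-callable sort support function: just installs the comparator. */
    Datum
    btint4sortsupport(PG_FUNCTION_ARGS)
    {
        SortSupport ssup = (SortSupport) PG_GETARG_POINTER(0);

        ssup->comparator = btint4fastcmp;
        PG_RETURN_VOID();
    }

The sort support function runs once per sort to fill in the struct; after that the sorting code calls the C comparator directly.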
|
Noted during post-commit review by Noah Misch.
|
If unable to connect to "postgres", try "template1". This allows things to
work more smoothly in the case where the postgres database has been
dropped. And just in case that's not good enough, also allow the user to
specify a maintenance database to be used for the initial connection, to
cover the case where neither postgres nor template1 is suitable.
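
A sketch of the fallback in libpq terms; the helper names and the surrounding option handling are illustrative, not the actual client code:

    #include "libpq-fe.h"

    static PGconn *
    try_db(const char *dbname)
    {
        const char *const keywords[] = {"dbname", NULL};
        const char *const values[]   = {dbname, NULL};
        PGconn     *conn = PQconnectdbParams(keywords, values, 0);

        if (PQstatus(conn) != CONNECTION_OK)
        {
            PQfinish(conn);
            return NULL;
        }
        return conn;
    }

    static PGconn *
    connect_maintenance_db(const char *maintenance_db)
    {
        PGconn     *conn;

        if (maintenance_db != NULL)
            return try_db(maintenance_db);      /* user override wins */

        conn = try_db("postgres");
        if (conn == NULL)
            conn = try_db("template1");         /* fallback if postgres is gone */
        return conn;
    }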
|
While logically correct, these two Asserts could fail depending on the
vagaries of floating-point arithmetic. In particular, on machines with
floating-point registers wider than standard "double" values, it was
possible for the compiler to compare a rounded-to-double value already
stored in memory with an unrounded long double value still in a register.
Given the preceding checks, these assertions aren't adding much, so let's
just get rid of them rather than try to find a compiler-proof fix.
Per report from Pavel Stehule.
Given the lack of previous complaints, and the fact that only developers
would be likely to trip over it, I'm only going to change this in HEAD,
even though the code has been like this for a long time.
|
Add a function plpy.cursor that is similar to plpy.execute but uses an
SPI cursor to avoid fetching the entire result set into memory.
Jan Urbański, reviewed by Steve Singer
|
This can be used to set (or unset) environment variables that will
affect programs called by psql (such as the PAGER), probably most
usefully in a .psqlrc file.
Andrew Dunstan, reviewed by Josh Kupershmidt.
|
code.
|
This makes it possible to use a libpq app with home directory set
to /dev/null, for example - treating it the same as if the file
doesn't exist (which it doesn't).
Per bug #6302, reported by Diego Elio Petteno
|
This gives editors a better chance to treat these files as the SQL
files that they are.
|
invalid-page hash table, PANIC immediately. Immediate PANIC is much better
than waiting for end-of-recovery, which is what we did before, because the
end-of-recovery might not come until months later if this is a standby
server.
Also refrain from creating a restartpoint if there are invalid-page entries
in the hash table. Restarting recovery from such a restartpoint would not
see the invalid references, and wouldn't be able to cross-check them when
consistency is reached. That wouldn't matter when things are going smoothly,
but the more sanity checks you have the better.
Fujii Masao
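
In outline, the new policy amounts to something like this sketch (the real logic lives around the invalid-page hash table in xlogutils.c):

    #include "postgres.h"

    /* Called when replay references a page that should exist but doesn't. */
    static void
    report_invalid_page_sketch(bool reached_consistency)
    {
        if (reached_consistency)
            elog(PANIC, "WAL contains references to invalid pages");
        else
            elog(WARNING, "invalid page reference; will recheck at consistency");
    }

Restartpoints are likewise refused while any invalid-page entries remain, so the cross-check is not lost by restarting from such a point.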
|
Not everyone has /pg linked to the src subdirectory of their PostgreSQL
tree. Also, cc isn't the way to invoke the compiler everywhere.
|
Since record[] uses array_in, it needs to have its element type passed
as typioparam. In HEAD and 9.1, this fix essentially reverts commit
9bc933b2125a5358722490acbc50889887bf7680, which was a hack that is no
longer needed since domains don't set their typelem anymore. Before
that, adjust the logic so that only domains are excluded from being
treated like arrays, rather than assuming that only base types should
be included. Add a regression test to demonstrate the need for this.
Per report from Maxim Boguk.
Back-patch to 8.4, where type record[] was added.
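
The rule being fixed is roughly the one getTypeIOParam() encodes (a sketch; the subtlety in the back branches was how domains were excluded):

    #include "postgres.h"

    /*
     * Sketch: a type with a nonzero typelem uses array_in and therefore
     * wants its element type passed as typioparam; record[] falls in this
     * group, since its typelem is "record".  Everything else passes its
     * own OID.
     */
    static Oid
    type_io_param_sketch(Oid typoid, Oid typelem)
    {
        return OidIsValid(typelem) ? typelem : typoid;
    }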
|
DST law changes in Brazil, Cuba, Fiji, Palestine, Russia, Samoa.
Historical corrections for Alaska and British East Africa.
|
In the previous coding, callers were faced with an awkward choice:
look up the name, do permissions checks, and then lock the table; or
look up the name, lock the table, and then do permissions checks.
The first choice was wrong because the results of the name lookup
and permissions checks might be out-of-date by the time the table
lock was acquired, while the second allowed a user with no privileges
to interfere with access to a table by users who do have privileges
(e.g. if a malicious backend queues up for an AccessExclusiveLock on
a table on which AccessShareLock is already held, further attempts
to access the table will be blocked until the AccessExclusiveLock
is obtained and the malicious backend's transaction rolls back).
To fix, allow callers of RangeVarGetRelid() to pass a callback which
gets executed after performing the name lookup but before acquiring
the relation lock. If the name lookup is retried (because
invalidation messages are received), the callback will be re-executed
as well, so we get the best of both worlds. RangeVarGetRelid() is
renamed to RangeVarGetRelidExtended(); callers not wishing to supply
a callback can continue to invoke it as RangeVarGetRelid(), which is
now a macro. Since the only caller that uses nowait = true now
passes a callback anyway, the RangeVarGetRelid() macro defaults nowait
as well. The callback can also be used for supplemental locking - for
example, REINDEX INDEX needs to acquire the table lock before the index
lock to reduce deadlock possibilities.
There's a lot more work to be done here to fix all the cases where this
can be a problem, but this commit provides the general infrastructure
and fixes the following specific cases: REINDEX INDEX, REINDEX TABLE,
LOCK TABLE, and DROP TABLE/INDEX/SEQUENCE/VIEW/FOREIGN TABLE.
Per discussion with Noah Misch and Alvaro Herrera.
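
A sketch of how a caller uses the hook; the callback body here (a simple owner check) is illustrative rather than one of the callbacks the commit adds, and the argument order shown is an assumption:

    #include "postgres.h"
    #include "catalog/namespace.h"
    #include "miscadmin.h"
    #include "storage/lock.h"
    #include "utils/acl.h"

    /* Runs after each name lookup, before locking, and again on any retry. */
    static void
    RangeVarCallbackCheckOwner(const RangeVar *relation, Oid relId,
                               Oid oldRelId, void *arg)
    {
        if (!OidIsValid(relId))
            return;                 /* relation vanished; let the caller cope */

        if (!pg_class_ownercheck(relId, GetUserId()))
            aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
                           relation->relname);
    }

    static Oid
    lock_relation_checking_owner(RangeVar *rel)
    {
        return RangeVarGetRelidExtended(rel, AccessExclusiveLock,
                                        false,  /* missing_ok */
                                        false,  /* nowait */
                                        RangeVarCallbackCheckOwner, NULL);
    }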
|
On a platform that isn't supplying __FILE__, previous coding would either
crash or give a stale result for the filename string. Not sure how likely
that is, but the original code catered for it, so let's keep doing so.
|
In vpath builds, the __FILE__ macro that is used in verbose error
reports contains the full absolute file name, which makes the error
messages excessively verbose. So keep only the base name, thus
matching the behavior of non-vpath builds.
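
The trimming itself is a one-liner; a sketch (a Windows build would also need to check for backslashes):

    #include <string.h>

    /* Keep only the part of __FILE__ after the last directory separator. */
    static const char *
    basename_of(const char *filename)
    {
        const char *slash = strrchr(filename, '/');

        return slash ? slash + 1 : filename;
    }

    /* Typically invoked as basename_of(__FILE__) at the reporting site. */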
|
Per buildfarm.
|
Force the transaction isolation level to READ COMMITTED in autovacuum
worker and launcher processes. There is no benefit to using a higher
isolation level, and doing so could result in delaying foreground
transactions (or maybe even causing unnecessary serialization failures?).
Noted by Dan Ports.
Also, make sure we disable zero_damaged_pages and statement_timeout in
the autovac launcher, not only workers. Now that the launcher can run
transactions, these settings could affect its behavior, and it seems
like the same arguments apply to the launcher as the workers.
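
A sketch of how such per-process overrides are typically forced; the GUC context and source constants are the usual ones for this pattern, not necessarily the exact ones the commit uses:

    #include "postgres.h"
    #include "utils/guc.h"

    static void
    force_autovacuum_settings(void)
    {
        /* A higher isolation level buys nothing here and can delay others. */
        SetConfigOption("default_transaction_isolation", "read committed",
                        PGC_SUSET, PGC_S_OVERRIDE);

        /* Disable these in the launcher as well as in workers. */
        SetConfigOption("zero_damaged_pages", "false",
                        PGC_SUSET, PGC_S_OVERRIDE);
        SetConfigOption("statement_timeout", "0",
                        PGC_SUSET, PGC_S_OVERRIDE);
    }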
|
Fix entirely broken handling of va_list printing routines, update some
out-of-date comments, fix some bogus inclusion orders, fix NLS declarations,
fix missed realloc calls.
|
Simple extension of previous patch for CHECK constraints.
|
pg_dumpall to use the same memory allocation functions as the others.
|
This should make it easier to identify which row is problematic when an
insert or update is processing many rows.
The formatting is similar to that for unique-index violation messages,
except that we limit field widths to 64 bytes since otherwise the message
could get unreasonably long. (In particular, there's currently no attempt
to quote or escape field values that contain commas etc.)
Jan Kundrát, reviewed by Royce Ausburn, somewhat rewritten by me.
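
A sketch of the 64-byte field-width limit mentioned above; the real code builds the context line with StringInfo rather than a fixed buffer:

    #include <stdio.h>
    #include <string.h>

    #define MAX_FIELD_DISPLAY 64    /* bytes of a field value to show */

    /* Append a possibly-truncated field value to an error-context buffer. */
    static void
    append_limited_value(char *dst, size_t dstlen, const char *value)
    {
        if (strlen(value) <= MAX_FIELD_DISPLAY)
            snprintf(dst, dstlen, "%s", value);
        else
            snprintf(dst, dstlen, "%.*s...", MAX_FIELD_DISPLAY, value);
    }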