relevant location.
palloc() will normally round allocation requests up to the next power of 2,
so make dynahash choose allocation sizes that are as close to a power of 2
as possible.
Back-patch to 8.1 --- the problem exists further back, but a much larger
patch would be needed and it doesn't seem worth taking any risks.
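
As an illustrative sketch of the sizing idea (generic C, not the actual
dynahash code): given palloc's power-of-2 rounding, pick an element count
that fills out the rounded chunk instead of wasting the slop.

    #include <stddef.h>

    /* Smallest power of 2 >= n, mirroring palloc's rounding behavior. */
    static size_t
    next_pow2(size_t n)
    {
        size_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    /* Given a desired element count, return the count that exactly fills
     * the power-of-2 chunk palloc would hand back anyway. */
    static size_t
    choose_nelem(size_t header, size_t elem_size, size_t want)
    {
        size_t budget = next_pow2(header + want * elem_size);
        return (budget - header) / elem_size;
    }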
changing semantics too much. statement_timestamp is now set immediately
upon receipt of a client command message, and the various places that used
to do their own gettimeofday() calls to mark command startup are referenced
to that instead. I have also made stats_command_string use that same
value for pg_stat_activity.query_start for both the command itself and
its eventual replacement by <IDLE> or <idle in transaction>. There was
some debate about that, but no argument that seemed convincing enough to
justify an extra gettimeofday() call.
libpq/md5.h, so that there's a clear separation between backend-only
definitions and shared frontend/backend definitions. (Turns out this
is reversing a bad decision from some years ago...) Fix up references
to crypt.h as needed. I looked into moving the code into src/port, but
the headers in src/include/libpq are sufficiently intertwined that it
seems more work than it's worth to do that.
current commands; instead, store current-status information in shared
memory. This substantially reduces the overhead of stats_command_string
and also ensures that pg_stat_activity is fully up to date at all times.
Per my recent proposal.
by creating a reference-count mechanism, similar to what we did a long time
ago for catcache entries. The back branches have an ugly solution involving
lots of extra copies, but this way is more efficient. Reference counting is
only applied to tupdescs that are actually in caches --- there seems no need
to use it for tupdescs that are generated in the executor, since they'll go
away during plan shutdown by virtue of being in the per-query memory context.
Neil Conway and Tom Lane
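
A minimal sketch of the refcounting pattern (names illustrative; the real
mechanism, tdrefcount, lives in tupdesc.c):

    #include <stdlib.h>

    typedef struct DemoTupleDesc
    {
        int tdrefcount;         /* -1: not refcounted (executor-local) */
        /* ... attribute definitions would follow ... */
    } DemoTupleDesc;

    static void
    demo_pin(DemoTupleDesc *td)
    {
        if (td->tdrefcount >= 0)
            td->tdrefcount++;   /* caller takes a reference */
    }

    static void
    demo_release(DemoTupleDesc *td)
    {
        if (td->tdrefcount >= 0 && --td->tdrefcount == 0)
            free(td);           /* last reference: reclaim the cached copy */
    }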
remove the infrastructure needed to enforce the limit, ie, the global
LRU list of cache entries. On small-to-middling databases this wins
because maintaining the LRU list is a waste of time. On large databases
this wins because it's better to keep more cache entries (we assume
such users can afford to use some more per-backend memory than was
contemplated in the Berkeley-era catcache design). This provides a
noticeable improvement in the speed of psql \d on a 10000-table
database, though it doesn't make it instantaneous.
While at it, use per-catcache settings for the number of hash buckets
per catcache, rather than the former one-size-fits-all value. It's a
bit silly to be using the same number of hash buckets for, eg, pg_am
and pg_attribute. The specific values I used might need some tuning,
but they seem to be in the right ballpark based on CATCACHE_STATS
results from the standard regression tests.
so on that platform we test for those before the computation and throw
an "out of range" error.
Backpatch to 8.1.X.
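
The fragment above doesn't show the computation in question, but the shape
of the fix is the classic pre-check, sketched here for integer division
(hedged illustration only):

    #include <limits.h>

    /* -INT_MIN is not representable, so INT_MIN / -1 overflows and can
     * trap at the hardware level on some platforms; test first. */
    static int
    checked_div(int a, int b)
    {
        if (a == INT_MIN && b == -1)
            return 0;   /* a real backend would ereport "out of range" */
        return a / b;
    }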
the other platform-specific cases in ps_status.
'2006-05-24 21:11 America/New_York'::timestamptz
Joachim Wieland
o remove many WIN32_CLIENT_ONLY defines
o add WIN32_ONLY_COMPILER define
o add 3rd argument to open() for portability
o add include/port/win32_msvc directory for
system includes
Magnus Hagander
that the Mackert-Lohmann formula applies across all the repetitions of the
nestloop, not just each scan independently. We use the M-L formula to
estimate the number of pages fetched from the index as well as from the table;
that isn't what it was designed for, but it seems reasonably applicable
anyway. This makes large numbers of repetitions look much cheaper than
before, which accords with many reports we've received of overestimation
of the cost of a nestloop. Also, change the index access cost model to
charge random_page_cost per index leaf page touched, while explicitly
not counting anything for access to metapage or upper tree pages. This
may all need tweaking after we get some field experience, but in simple
tests it seems to be giving saner results than before. The main thing
is to get the infrastructure in place to let cost_index() and amcostestimate
functions take repeated scans into account at all. Per my recent proposal.
Note: this patch changes pg_proc.h, but I did not force initdb because
the changes are basically cosmetic --- the system does not look into
pg_proc to decide how to call an index amcostestimate function, and
there's no way to call such a function from SQL at all.
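
For reference, the Mackert-Lohmann estimate of pages fetched, with T the
number of pages in the relation, N the number of tuples fetched, and b the
buffer size (quoted as commonly stated; see the costsize.c comments for
the authoritative form):

    \[
    PF =
    \begin{cases}
      \min\!\left(\frac{2TN}{2T+N},\; T\right) & T \le b \\
      \frac{2TN}{2T+N} & T > b,\ N \le \frac{2Tb}{2T-b} \\
      b + \left(N - \frac{2Tb}{2T-b}\right)\frac{T-b}{T} & \text{otherwise}
    \end{cases}
    \]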
This shouldn't affect simple indexscans much, while for bitmap scans that
are touching a lot of index rows, this seems to bring the estimates more
in line with reality. Per recent discussion.
assumed that a sequential page fetch has cost 1.0. This patch doesn't
in itself change the system's behavior at all, but it opens the door to
people adopting other units of measurement for EXPLAIN costs. Also, if
we ever decide it's worth inventing per-tablespace access cost settings,
this change provides a workable intellectual framework for that.
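
To illustrate the unit convention (a textbook-style approximation, not the
exact costsize.c formula): with seq_page_cost as the explicit yardstick, a
simple sequential scan's cost reads

    \[
    \mathrm{cost} \approx P \cdot \mathrm{seq\_page\_cost}
      + N \cdot \mathrm{cpu\_tuple\_cost}
    \]

so setting seq_page_cost to something other than 1.0 rescales EXPLAIN's
numbers without changing plan choices, provided the other cost parameters
are scaled to match.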
for LC_MESSAGES; instead, just press forward, leaving the effective setting
at 'C'. There is not any very good reason to complain when we are going
to replace the value soon with whatever postgresql.conf says. This change
should solve the occasionally-reported problem of initdb failing with
'failed to initialize lc_messages'; the current theory is that that is
a reflection of either wrong LANG/LC_MESSAGES or completely broken locale
support.
the server. Per discussion, there seems no point in a waiting period
before making this required.
in every shared library.
as this seems only likely to create headaches for module developers. Put
the macro in the pre-existing fmgr.h file instead. Avoid being too cute
about how many fields we can cram into a word, and avoid trying to fetch
from a library we've already unlinked.
Along the way, it occurred to me that the magic block really ought to be
'const' so it can be stored in the program text area. Do the same for
the existing data blocks for PG_FUNCTION_INFO_V1 functions.
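
A minimal module showing both blocks in use (standard extension
boilerplate; the function itself is a made-up example):

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;                   /* const magic block, checked at load */

    PG_FUNCTION_INFO_V1(demo_add_one); /* likewise emits a const info block */

    Datum
    demo_add_one(PG_FUNCTION_ARGS)
    {
        int32 arg = PG_GETARG_INT32(0);
        PG_RETURN_INT32(arg + 1);
    }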
It now only checks four things:
Major version number (7.4 or 8.1 for example)
NAMEDATALEN
FUNC_MAX_ARGS
INDEX_MAX_KEYS
The three constants were chosen because:
1. We document them in the config page in the docs
2. We mark them as changeable in pg_config_manual.h
3. Changing any of these will break some of the more popular modules:
   FUNC_MAX_ARGS changes the fmgr interface; every module uses this.
   NAMEDATALEN changes the syscache interface; every PL as well as tsearch uses this.
   INDEX_MAX_KEYS breaks tsearch and anything using GiST.
Martijn van Oosterhout
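
Roughly, the block reduces to a struct like this (field names illustrative;
Pg_magic_struct in fmgr.h is the real thing):

    typedef struct
    {
        int len;           /* sizeof(this struct): catches layout changes */
        int version;       /* major version, e.g. 802 for 8.2.x */
        int funcmaxargs;   /* FUNC_MAX_ARGS the module was compiled with */
        int indexmaxkeys;  /* INDEX_MAX_KEYS likewise */
        int namedatalen;   /* NAMEDATALEN likewise */
    } DemoModuleMagic;

    /* At load time the server compares its own compiled-in values with
     * the module's copy and rejects the library on any mismatch. */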
and standard_conforming_strings; likewise for the other client programs
that need it. As per previous discussion, a pg_dump dump now conforms
to the standard_conforming_strings setting of the source database.
We don't use E'' syntax in the dump, thereby improving portability of
the SQL. I added a SET escape_string_warning = off command to keep
the dumps from getting a lot of back-chatter from that.
'off'. This allows pg_dump output with standard_conforming_strings =
'on' to generate proper strings that can be loaded into other databases
without the backslash doubling we typically do. I have added the
dumping of the standard_conforming_strings value to pg_dump.
I also added standard backslash handling for plpgsql.
and transaction visibility fields of tuples being sorted. These are
always uninteresting in a tuple being sorted (if the fields were actually
selected, they'd have been pulled out into user columns beforehand).
This saves about 24 bytes per row being sorted, which is a useful savings
for any but the widest of sort rows. Per recent discussion.
parser will allow "\'" to be used to represent a literal quote mark. The
"\'" representation has been deprecated for some time in favor of the
SQL-standard representation "''" (two single quote marks), but it has been
used often enough that just disallowing it immediately won't do. Hence
backslash_quote allows the settings "on", "off", and "safe_encoding",
the last meaning to allow "\'" only if client_encoding is a valid server
encoding. That is now the default, and the reason is that in encodings
such as SJIS that allow 0x5c (ASCII backslash) to be the last byte of a
multibyte character, accepting "\'" allows SQL-injection attacks as per
CVE-2006-2314 (further details will be published after release). The
"on" setting is available for backward compatibility, but it must not be
used with clients that are exposed to untrusted input.
Thanks to Akio Ishida and Yasuo Ohgaki for identifying this security issue.
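
To make the hazard concrete (a hedged illustration, not code from the
patch): in SJIS the character 表 is encoded as 0x95 0x5C, and 0x5C is also
ASCII backslash.

    /* A byte-oriented parser honoring \' sees the trailing 0x5C of the
     * multibyte character, treats the next quote as escaped, and keeps
     * consuming attacker-controlled bytes as if still inside the string. */
    static const unsigned char sjis_input[] = {
        0x95, 0x5C,   /* one SJIS character whose 2nd byte equals '\\' */
        0x27          /* '\'' -- misread as an escaped quote */
    };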
characters in all cases. Formerly we mostly just threw warnings for invalid
input, and failed to detect it at all if no encoding conversion was required.
The tighter check is needed to defend against SQL-injection attacks as per
CVE-2006-2313 (further details will be published after release). Embedded
zero (null) bytes will be rejected as well. The checks are applied during
input to the backend (receipt from client or COPY IN), so it no longer seems
necessary to check in textin() and related routines; any string arriving at
those functions will already have been validated. Conversion failure
reporting (for characters with no equivalent in the destination encoding)
has been cleaned up and made consistent while at it.
Also, fix a few longstanding errors in little-used encoding conversion
routines: win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic,
mic_to_euc_tw were all broken to varying extents.
Patches by Tatsuo Ishii and Tom Lane. Thanks to Akio Ishida and Yasuo Ohgaki
for identifying the security issues.
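
A minimal sketch of what input-time validation entails, assuming UTF-8
(the real checks are encoding-aware and also reject overlong sequences;
this is illustrative only):

    #include <stdbool.h>
    #include <stddef.h>

    static bool
    demo_utf8_ok(const unsigned char *s, size_t len)
    {
        for (size_t i = 0; i < len; )
        {
            if (s[i] == 0)
                return false;           /* embedded zero byte: reject */
            int n = s[i] < 0x80 ? 1 :
                    (s[i] & 0xE0) == 0xC0 ? 2 :
                    (s[i] & 0xF0) == 0xE0 ? 3 :
                    (s[i] & 0xF8) == 0xF0 ? 4 : 0;
            if (n == 0 || i + n > len)
                return false;           /* bad lead byte or truncated char */
            for (int k = 1; k < n; k++)
                if ((s[i + k] & 0xC0) != 0x80)
                    return false;       /* bad continuation byte */
            i += n;
        }
        return true;
    }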
issued by autovacuum. Add accessor functions to them, and use those in the
pg_stat_*_tables system views.
Catalog version bumped due to changes in the pgstat views and the pgstat file.
Patch from Larry Rosenman, minor improvements by me.
throw warnings for 100%-SQL-standard constructs, clean up some minor
infelicities, try to un-break ecpg to the best of my ability. (It's not clear
how ecpg is going to find out the setting of standard_conforming_strings,
though.) I think pg_dump still needs work, too.
needNewCacheFile flag anymore, it can just be local in RelationCacheInitializePhase2.
it's not necessary to have three separate calls anymore. This patch also
fixes things so we don't try to read pg_internal.init until after we've
obtained lock on the target database; which was fairly harmless, but it's
certainly cleaner this way.
The former approach used ExclusiveLock on pg_database, which being a
cluster-wide lock meant only one of these operations could proceed at
a time; worse, it also blocked all incoming connections in ReverifyMyDatabase.
Now that we have LockSharedObject(), we can use locks of different types
applied to databases considered as objects. This allows much more
flexible management of the interlocking: two CREATE DATABASEs need not
block each other, and need not block connections except to the template
database being used. Similarly DROP DATABASE doesn't block unrelated
operations. The locking used in flatfiles.c is also much narrower in
scope than before. Per recent proposal.
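
The core of the scheme, sketched (simplified; see dbcommands.c for real
usage of the lmgr API):

    #include "postgres.h"
    #include "catalog/pg_database.h"    /* DatabaseRelationId */
    #include "storage/lmgr.h"

    static void
    with_database_locked(Oid db_oid)
    {
        /* Lock the database as an object, not the pg_database catalog,
         * so unrelated CREATE/DROP DATABASE commands don't serialize. */
        LockSharedObject(DatabaseRelationId, db_oid, 0, AccessExclusiveLock);

        /* ... drop, rename, or otherwise modify the database ... */

        UnlockSharedObject(DatabaseRelationId, db_oid, 0, AccessExclusiveLock);
    }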
in various places that were previously doing ad hoc pg_database searches.
This may speed up database-related privilege checks a little bit, but
the main motivation is to eliminate the performance reason for having
ReverifyMyDatabase do such a lot of stuff (viz, avoiding repeat scans
of pg_database during backend startup). The locking reason for having
that routine is about to go away, and it'd be good to have the option
to break it up.
text[], int4[], Tsearch2 support for GIN.
the union of its child relations as well. This might have been a good idea
when it was originally coded, but it's a fatally bad idea when inheritance is
being used for partitioning. It's better to have no stats at all than
completely misleading stats. Per report from Mark Liberman.
The bug arguably exists all the way back, but I've only patched HEAD and 8.1
because we weren't particularly trying to support partitioning before 8.1.
Eventually we ought to look at deriving union statistics instead of just
punting, but for now the drop kick looks good.
input datatypes given, and use this before trying OpernameGetCandidates.
This is faster than the old method when there's an exact match, and it
does not seem materially slower when there's not. And it definitely
makes some of the callers cleaner, because they didn't really want to
know about a list of candidates anyway. Per discussion with Atsushi Ogawa.
CONNECTION, fix a number of places that were missed (eg pg_dump support),
avoid executing an extra search of pg_database during startup.
support both FOR UPDATE and FOR SHARE in one command, as well as both
NOWAIT and normal WAIT behavior. The more general code is actually
simpler and cleaner.
Gevik Babakhani
cases. This was not needed in the existing uses within selfuncs.c, but if
we're gonna export it for general use, the extra generality seems helpful.
Motivated by looking at ltree example.
then we should export a reasonable set of the supporting routines too.
Matteo Beccati
thereby saving a visit to the metapage in most index searches/updates.
This wouldn't actually save any I/O (since in the old regime the metapage
generally stayed in cache anyway), but it does provide a useful decrease
in bufmgr traffic in high-contention scenarios. Per my recent proposal.
Hans-Jürgen Schönig
transaction_timestamp() (just like now()).
Also update statement_timeout() to mention it is statement arrival time
that is measured.
Catalog version updated.
accuracy expected by the regression tests. Per suggestion from
Martijn van Oosterhout.
Report from Gevik Babakhani.
|