| |
relevant location.
| |
changing semantics too much. statement_timestamp is now set immediately
upon receipt of a client command message, and the various places that used
to do their own gettimeofday() calls to mark command startup are referenced
to that instead. I have also made stats_command_string use that same
value for pg_stat_activity.query_start for both the command itself and
its eventual replacement by <IDLE> or <idle in transaction>. There was
some debate about that, but no argument that seemed convincing enough to
justify an extra gettimeofday() call.
| |
libpq/md5.h, so that there's a clear separation between backend-only
definitions and shared frontend/backend definitions. (Turns out this
is reversing a bad decision from some years ago...) Fix up references
to crypt.h as needed. I looked into moving the code into src/port, but
the headers in src/include/libpq are sufficiently intertwined that it
seems more work than it's worth to do that.
| |
current commands; instead, store current-status information in shared
memory. This substantially reduces the overhead of stats_command_string
and also ensures that pg_stat_activity is fully up to date at all times.
Per my recent proposal.
| |
by creating a reference-count mechanism, similar to what we did a long time
ago for catcache entries. The back branches have an ugly solution involving
lots of extra copies, but this way is more efficient. Reference counting is
only applied to tupdescs that are actually in caches --- there seems no need
to use it for tupdescs that are generated in the executor, since they'll go
away during plan shutdown by virtue of being in the per-query memory context.
Neil Conway and Tom Lane
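
As an aside, a minimal plain-C sketch of the reference-count discipline described above, with hypothetical names (the backend's actual tuple-descriptor API and memory management differ):

    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical cache entry; the payload would be the tuple descriptor. */
    typedef struct CachedDesc
    {
        int         refcount;   /* number of callers currently pinning this */
    } CachedDesc;

    static void
    pin_desc(CachedDesc *d)
    {
        d->refcount++;          /* caller must eventually call release_desc */
    }

    static void
    release_desc(CachedDesc *d)
    {
        assert(d->refcount > 0);
        if (--d->refcount == 0)
            free(d);            /* last pin released: safe to reclaim */
    }
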
| |
so on that platform we test for those before the computation and throw
an "out of range" error.
Backpatch to 8.1.X.
| |
'2006-05-24 21:11 Americas/New_York'::timestamptz
Joachim Wieland
| |
o remove many WIN32_CLIENT_ONLY defines
o add WIN32_ONLY_COMPILER define
o add 3rd argument to open() for portability (see the sketch after this list)
o add include/port/win32_msvc directory for
system includes
Magnus Hagander
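
On the open() bullet above: POSIX only requires the third (mode) argument when O_CREAT is passed, but supplying it unconditionally avoids trouble with compilers and C runtimes that are stricter about the call. A hedged, self-contained illustration (not the committed change; the function name is made up):

    #include <fcntl.h>

    int
    open_data_file(const char *path)
    {
        /* 0600 = owner read/write.  The mode is ignored unless the file is
         * actually being created, but supplying it is harmless everywhere. */
        return open(path, O_RDWR | O_CREAT, 0600);
    }
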
| |
that the Mackert-Lohmann formula applies across all the repetitions of the
nestloop, not just each scan independently. We use the M-L formula to
estimate the number of pages fetched from the index as well as from the table;
that isn't what it was designed for, but it seems reasonably applicable
anyway. This makes large numbers of repetitions look much cheaper than
before, which accords with many reports we've received of overestimation
of the cost of a nestloop. Also, change the index access cost model to
charge random_page_cost per index leaf page touched, while explicitly
not counting anything for access to metapage or upper tree pages. This
may all need tweaking after we get some field experience, but in simple
tests it seems to be giving saner results than before. The main thing
is to get the infrastructure in place to let cost_index() and amcostestimate
functions take repeated scans into account at all. Per my recent proposal.
Note: this patch changes pg_proc.h, but I did not force initdb because
the changes are basically cosmetic --- the system does not look into
pg_proc to decide how to call an index amcostestimate function, and
there's no way to call such a function from SQL at all.
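
For reference, the Mackert-Lohmann page-fetch estimate as the planner's cost code states it, paraphrased from memory with my own notation (T = pages in the relation, N = tuples, s = selectivity, b = effective cache size in pages):

    PF = \min\!\left(\frac{2TNs}{2T + Ns},\; T\right)  \quad \text{when } T \le b
    PF = \frac{2TNs}{2T + Ns}  \quad \text{when } T > b \text{ and } Ns \le \frac{2Tb}{2T - b}
    PF = b + \frac{\left(Ns - \frac{2Tb}{2T - b}\right)(T - b)}{T}  \quad \text{when } T > b \text{ and } Ns > \frac{2Tb}{2T - b}

The effect of the change above is that Ns is now taken over all repetitions of the nestloop rather than per inner scan, so repeated index scans share credit for pages that stay in cache.
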
| |
assumed that a sequential page fetch has cost 1.0. This patch doesn't
in itself change the system's behavior at all, but it opens the door to
people adopting other units of measurement for EXPLAIN costs. Also, if
we ever decide it's worth inventing per-tablespace access cost settings,
this change provides a workable intellectual framework for that.
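
As a hedged illustration of what "units of sequential page fetches" means for the arithmetic (simplified: qual-evaluation and other CPU charges are omitted), the cost of a plain sequential scan over P heap pages holding N tuples is roughly

    \mathrm{cost}_{\mathrm{seqscan}} \approx \mathrm{seq\_page\_cost} \cdot P + \mathrm{cpu\_tuple\_cost} \cdot N

where the per-page term was previously fixed at 1.0 by definition; exposing it as a settable parameter (seq_page_cost) is what allows EXPLAIN costs to be expressed in other units.
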
| |
for LC_MESSAGES; instead, just press forward, leaving the effective setting
at 'C'. There is not any very good reason to complain when we are going
to replace the value soon with whatever postgresql.conf says. This change
should solve the occasionally-reported problem of initdb failing with
'failed to initialize lc_messages'; the current theory is that that is
a reflection of either wrong LANG/LC_MESSAGES or completely broken locale
support.
| |
and standard_conforming_strings; likewise for the other client programs
that need it. As per previous discussion, a pg_dump dump now conforms
to the standard_conforming_strings setting of the source database.
We don't use E'' syntax in the dump, thereby improving portability of
the SQL. I added a SET escape_string_warning = off command to keep
the dumps from getting a lot of back-chatter from that.
| |
'off'. This allows pg_dump output with standard_conforming_strings =
'on' to generate proper strings that can be loaded into other databases
without the backslash doubling we typically do. I have added the
dumping of the standard_conforming_strings value to pg_dump.
I also added standard backslash handling for plpgsql.
| |
characters in all cases. Formerly we mostly just threw warnings for invalid
input, and failed to detect it at all if no encoding conversion was required.
The tighter check is needed to defend against SQL-injection attacks as per
CVE-2006-2313 (further details will be published after release). Embedded
zero (null) bytes will be rejected as well. The checks are applied during
input to the backend (receipt from client or COPY IN), so it no longer seems
necessary to check in textin() and related routines; any string arriving at
those functions will already have been validated. Conversion failure
reporting (for characters with no equivalent in the destination encoding)
has been cleaned up and made consistent while at it.
Also, fix a few longstanding errors in little-used encoding conversion
routines: win1251_to_iso, win866_to_iso, euc_tw_to_big5, euc_tw_to_mic,
mic_to_euc_tw were all broken to varying extents.
Patches by Tatsuo Ishii and Tom Lane. Thanks to Akio Ishida and Yasuo Ohgaki
for identifying the security issues.
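
To make the validation concrete, here is a minimal, self-contained sketch for the UTF-8 case only. It is a simplified stand-in for the backend's verifier: it rejects embedded zero bytes, stray continuation bytes, illegal lead bytes, and truncated sequences, but not the further rules (overlong forms, surrogates, code-point range) a full validator enforces.

    #include <stdbool.h>
    #include <stddef.h>

    static bool
    utf8_ok(const unsigned char *s, size_t len)
    {
        size_t      i = 0;

        while (i < len)
        {
            unsigned char c = s[i];
            size_t      extra;

            if (c == 0)
                return false;   /* embedded null byte */
            else if (c < 0x80)
                extra = 0;      /* plain ASCII */
            else if ((c & 0xE0) == 0xC0)
                extra = 1;      /* 2-byte sequence */
            else if ((c & 0xF0) == 0xE0)
                extra = 2;      /* 3-byte sequence */
            else if ((c & 0xF8) == 0xF0)
                extra = 3;      /* 4-byte sequence */
            else
                return false;   /* stray continuation or illegal lead byte */

            if (extra >= len - i)
                return false;   /* sequence truncated at end of input */

            for (size_t j = 1; j <= extra; j++)
            {
                if ((s[i + j] & 0xC0) != 0x80)
                    return false;   /* expected a continuation byte */
            }
            i += extra + 1;
        }
        return true;
    }
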
| |
issued by autovacuum. Add accessor functions to them, and use those in the
pg_stat_*_tables system views.
Catalog version bumped due to changes in the pgstat views and the pgstat file.
Patch from Larry Rosenman, minor improvements by me.
| |
text[], int4[], Tsearch2 support for GIN.
| |
the union of its child relations as well. This might have been a good idea
when it was originally coded, but it's a fatally bad idea when inheritance is
being used for partitioning. It's better to have no stats at all than
completely misleading stats. Per report from Mark Liberman.
The bug arguably exists all the way back, but I've only patched HEAD and 8.1
because we weren't particularly trying to support partitioning before 8.1.
Eventually we ought to look at deriving union statistics instead of just
punting, but for now the drop kick looks good.
| |
input datatypes given, and use this before trying OpernameGetCandidates.
This is faster than the old method when there's an exact match, and it
does not seem materially slower when there's not. And it definitely
makes some of the callers cleaner, because they didn't really want to
know about a list of candidates anyway. Per discussion with Atsushi Ogawa.
| |
CONNECTION, fix a number of places that were missed (eg pg_dump support),
avoid executing an extra search of pg_database during startup.
| |
support both FOR UPDATE and FOR SHARE in one command, as well as both
NOWAIT and normal WAIT behavior. The more general code is actually
simpler and cleaner.
| |
Gevik Babakhani
| |
cases. This was not needed in the existing uses within selfuncs.c, but if
we're gonna export it for general use, the extra generality seems helpful.
Motivated by looking at ltree example.
| |
then we should export a reasonable set of the supporting routines too.
| |
Matteo Beccati
| |
transaction_timestamp() (just like now()).
Also update statement_timeout() to mention it is statement arrival time
that is measured.
Catalog version updated.
| |
accuracy expected by the regression tests. Per suggestion from
Martijn van Oosterhout.
| |
Report from Gevik Babakhani.
| |
not named ones, and replace linear searches of the list with array indexing.
The named-parameter support has been dead code for many years anyway,
and recent profiling suggests that the searching was costing a noticeable
amount of performance for complex queries.
| |
of rejecting palloc(0). Also, tweak like_selectivity() to avoid assuming
the presented pattern is nonempty; although that assumption is valid,
it doesn't really help much, and the new coding is more correct anyway
since it properly handles redundant wildcards. In combination these
changes should eliminate a Coverity warning noted by Martijn.
| |
our to_* functions were not handling that.
| |
alternatives ("|" symbol). The original coding allowed the added ^ and $
constraints to be absorbed into the first and last alternatives, producing
a pattern that would match more than it should. Per report from Eric Noriega.
I also changed the pattern to add an ARE director ("***:"), ensuring that
SIMILAR TO patterns do not change behavior if regex_flavor is changed. This
is necessary to make the non-capturing parentheses work, and seems like a
good idea on general principles.
Back-patched as far as 7.4. 7.3 also has the bug, but a fix seems impractical
because that version's regex engine doesn't have non-capturing parens.
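
To make the fix concrete, a hedged sketch of the wrapping described above: the translated pattern is prefixed with the ARE director and enclosed in a non-capturing group, so a top-level | cannot swallow the ^ and $ anchors. The function name is illustrative, not the backend's, and the actual rewriting of %, _, and the escape character is omitted.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Wrap an already-rewritten SIMILAR TO pattern for the regex engine. */
    static char *
    wrap_similar_pattern(const char *translated)
    {
        const char *prefix = "***:^(?:";    /* ARE director, anchor, group */
        const char *suffix = ")$";
        size_t      n = strlen(prefix) + strlen(translated) + strlen(suffix) + 1;
        char       *result = malloc(n);

        if (result != NULL)
            snprintf(result, n, "%s%s%s", prefix, translated, suffix);
        return result;
    }

In this shape a pattern like a|b becomes ***:^(?:a|b)$, matching exactly "a" or "b", instead of the old form in which the anchors bound only to the first and last alternatives.
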
| |
when trying to locate the referent of a RECORD variable. This fixes the
'record type has not been registered' failure reported by Stefan
Kaltenbrunner about a month ago. A side effect of the way I chose to
fix it is that most variable references in join conditions will now be
properly labeled with the variable's source table name, instead of the
not-too-helpful 'outer' or 'inner' we used to use.
| |
that apply the necessary domain constraint checks immediately. This fixes
cases where domain constraints went unchecked for statement parameters,
PL function local variables and results, etc. We can also eliminate existing
special cases for domains in places that had gotten it right, eg COPY.
Also, allow domains over domains (base of a domain is another domain type).
This almost worked before, but was disallowed because the original patch
hadn't gotten it quite right.
| |
functions are not strict, they will be called (passing a NULL first parameter)
during any attempt to input a NULL value of their datatype. Currently, all
our input functions are strict and so this commit does not change any
behavior. However, this will make it possible to build domain input functions
that centralize checking of domain constraints, thereby closing numerous holes
in our domain support, as per previous discussion.
While at it, I took the opportunity to introduce convenience functions
InputFunctionCall, OutputFunctionCall, etc to use in code that calls I/O
functions. This eliminates a lot of grotty-looking casts, but the main
motivation is to make it easier to grep for these places if we ever need
to touch them again.
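
For orientation, a short sketch of the intended call pattern for the new convenience wrappers. Backend context is assumed, so this fragment is not standalone, and the usage shown is a paraphrase of the commit rather than documentation.

    #include "postgres.h"
    #include "fmgr.h"

    /* Convert a C string (possibly NULL) to a Datum using the prepared
     * FmgrInfo for the type's input function.  With a non-strict input
     * function, the NULL is passed through, which is what lets a domain
     * input function check its constraints even for NULL values. */
    static Datum
    read_value(FmgrInfo *flinfo, char *str, Oid typioparam, int32 typmod)
    {
        return InputFunctionCall(flinfo, str, typioparam, typmod);
    }

    /* And the reverse direction: Datum back to a palloc'd C string. */
    static char *
    print_value(FmgrInfo *flinfo, Datum value)
    {
        return OutputFunctionCall(flinfo, value);
    }
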
| |
supplied for various mistakes involving INSERT and UPDATE target columns.
| |
non-NULL: palloc() ereports on OOM, so we can safely assume it returns a
valid pointer.
| |
The original coding stored the raw parser output (ColumnDef and TypeName
nodes) which was ugly, bulky, and wrong because it failed to create any
dependency on the referenced datatype --- and in fact would not track type
renamings and suchlike. Instead store a list of column type OIDs in the
RTE.
Also fix up general failure of recordDependencyOnExpr to do anything sane
about recording dependencies on datatypes. While there are many cases where
there will be an indirect dependency (eg if an operator returns a datatype,
the dependency on the operator is enough), we do have to record the datatype
as a separate dependency in examples like CoerceToDomain.
initdb forced because of change of stored rules.
| |
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
| |
derived from Jan's.
| |
similar constants if they were not previously defined. All these
constants must be defined by limits.h according to C89, so we can
safely assume they are present.
| |
var_samp(), stddev_pop(), and stddev_samp(). var_samp() and stddev_samp()
are just renamings of the historical Postgres aggregates variance() and
stddev() -- the latter names have been kept for backward compatibility.
This patch includes updates for the documentation and regression tests.
The catversion has been bumped.
NB: SQL2003 requires that DISTINCT not be specified for any of these
aggregates. Per discussion on -patches, I have NOT implemented this
restriction: if the user asks for stddev(DISTINCT x), presumably they
know what they are doing.
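
For reference, the population vs. sample definitions behind the names (so the historical variance() and stddev() are the _samp forms):

    \mathrm{var\_pop} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2
    \qquad
    \mathrm{var\_samp} = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2

    \mathrm{stddev\_pop} = \sqrt{\mathrm{var\_pop}}
    \qquad
    \mathrm{stddev\_samp} = \sqrt{\mathrm{var\_samp}}
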
| |
not likely ever to be implemented seeing it's been removed from SQL2003.
This allows getting rid of the 'filter' version of yylex() that we had in
parser.c, which should save at least a few microseconds in parsing.
| |
- new function justify_interval(interval)
- modified function justify_hours(interval)
- modified function justify_days(interval)
These functions are defined to meet the requirements as discussed in
this thread. Specifically:
- justify_hours makes certain the sign bit on the hours
matches the sign bit on the days. It only checks the
sign bit on the days, and not the months, when
determining if the hours should be positive or negative.
After the call, -24 < hours < 24.
- justify_days makes certain the sign bit on the days
matches the sign bit on the months. Its behavior does
not depend on the hours, nor does it modify the hours.
After the call, -30 < days < 30.
- justify_interval makes sure the sign bits on all three
fields months, days, and hours are all the same. After
the call, -24 < hours < 24 AND -30 < days < 30.
Mark Dilger
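
A minimal, self-contained plain-C sketch of the sign-normalization rules listed above. The Span struct and helpers are stand-ins (the backend works on months/days/microseconds; hours substitute for the time part here), so treat it as an illustration of the rules, not the committed interval code.

    #include <stdio.h>

    typedef struct { int months, days, hours; } Span;

    /* Fold whole days out of hours; make hours agree in sign with days. */
    static void
    justify_hours(Span *s)
    {
        s->days += s->hours / 24;
        s->hours %= 24;
        if (s->days > 0 && s->hours < 0)
        {
            s->hours += 24;
            s->days--;
        }
        else if (s->days < 0 && s->hours > 0)
        {
            s->hours -= 24;
            s->days++;
        }
    }

    /* Fold whole 30-day months out of days; hours are left untouched. */
    static void
    justify_days(Span *s)
    {
        s->months += s->days / 30;
        s->days %= 30;
        if (s->months > 0 && s->days < 0)
        {
            s->days += 30;
            s->months--;
        }
        else if (s->months < 0 && s->days > 0)
        {
            s->days -= 30;
            s->months++;
        }
    }

    /* Make all three fields agree in sign, with |hours| < 24 and |days| < 30. */
    static void
    justify_interval(Span *s)
    {
        s->days += s->hours / 24;
        s->hours %= 24;
        s->months += s->days / 30;
        s->days %= 30;
        if (s->months > 0 && (s->days < 0 || (s->days == 0 && s->hours < 0)))
        {
            s->days += 30;
            s->months--;
        }
        else if (s->months < 0 && (s->days > 0 || (s->days == 0 && s->hours > 0)))
        {
            s->days -= 30;
            s->months++;
        }
        if (s->days > 0 && s->hours < 0)
        {
            s->hours += 24;
            s->days--;
        }
        else if (s->days < 0 && s->hours > 0)
        {
            s->hours -= 24;
            s->days++;
        }
    }

    int
    main(void)
    {
        Span s = {0, 1, -25};   /* "1 day -25 hours" */

        justify_interval(&s);
        printf("%d mon %d days %d hours\n", s.months, s.days, s.hours);
        return 0;               /* prints: 0 mon 0 days -1 hours */
    }

Running the sketch shows, for example, that "1 day -25 hours" normalizes to -1 hour with all fields carrying the same sign.
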