| Commit message | Author | Age |
|
In the past, we used a 'Lispy' linked list implementation: a "list" was
merely a pointer to the head node of the list. The problem with that
design is that it makes lappend() and length() linear time. This patch
fixes that problem (and others) by maintaining a count of the list
length and a pointer to the tail node along with each head node pointer.
A "list" is now a pointer to a structure containing some meta-data
about the list; the head and tail pointers in that structure refer
to ListCell structures that maintain the actual linked list of nodes.
The function names of the list API have also been changed to, I hope,
be more logically consistent. By default, the old function names are
still available; they will be disabled-by-default once the rest of
the tree has been updated to use the new API names.
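To illustrate the new representation, here is a minimal sketch of the idea
(field and function names are illustrative; the real definitions are in
pg_list.h and differ in detail):

#include <stdlib.h>

/* Illustrative sketch only -- see pg_list.h for the real definitions. */
typedef struct ListCell
{
    void            *value;        /* datum stored in this cell */
    struct ListCell *next;         /* next cell, or NULL at the tail */
} ListCell;

typedef struct List
{
    int       length;              /* kept up to date, so length() is O(1) */
    ListCell *head;                /* first cell, or NULL if list is empty */
    ListCell *tail;                /* last cell, so lappend() is O(1) too */
} List;

/* Append a datum in constant time, thanks to the tail pointer.
 * (Error handling omitted; sketch only.) */
static List *
lappend_sketch(List *list, void *datum)
{
    ListCell *cell = (ListCell *) malloc(sizeof(ListCell));

    cell->value = datum;
    cell->next = NULL;
    if (list->tail != NULL)
        list->tail->next = cell;
    else
        list->head = cell;
    list->tail = cell;
    list->length++;
    return list;
}

With the count and tail pointer kept in the header structure, both length()
and appending are constant time, at the cost of one small header allocation
per list.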
|
functions. This allows these functions to work correctly with Unicode and
other multibyte encodings. Per prior discussion.
Also, revert my earlier change to move installation path mashing from
Makefile.global to configure. That turns out not to work well because the
configure script works with unexpanded variables, and so fails to match in
cases where it should match.
|
several different module Makefiles with it. Also, do any adjustment
of installation paths during configure, rather than every time Makefile.global
is read.
|
and should do now that we control our own destiny for timezone handling,
but this commit gets the bulk of the picayune diffs in place.
Magnus Hagander and Tom Lane.
|
right error code previously, and this patch applies an analogous change
to numeric_sqrt().
|
Create new get_* functions to access compiled-in paths and adjust if
relative installs are to be used.
Clean up substitute_libpath_macro() code.
|
error codes for certain error conditions, as specified by SQL2003.
|
a variant of the function for the 'numeric' datatype; it would be possible
to add additional variants for other datatypes, but I haven't done so yet.
This commit includes regression tests and minimal documentation; if we
want developers to actually use this function in applications, we'll
probably need to document what it does more fully.
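For reference, the bucketing rule the SQL2003 function specifies looks
roughly like this sketch over doubles with ascending bounds (the committed
function works on the numeric datatype and handles more cases):

#include <math.h>

/*
 * Sketch of SQL2003 width_bucket(operand, low, high, count): values below
 * the range fall in bucket 0, values at or above the upper bound fall in
 * bucket count + 1, and the rest are spread over "count" equal-width buckets.
 */
static int
width_bucket_sketch(double operand, double low, double high, int count)
{
    if (operand < low)
        return 0;
    if (operand >= high)
        return count + 1;
    return (int) floor((operand - low) / (high - low) * (double) count) + 1;
}

For example, width_bucket_sketch(5.35, 0.024, 10.06, 5) returns 3.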
|
backend startup.
|
by Ken Ashcraft's report. I think there is no actual bug here since if
the int32 value does wrap a little bit, palloc will still reject it.
Still it's better that the code be obviously correct.
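The defensive pattern at issue looks something like this hypothetical sketch
(not the actual patched code): validate the element count against the
allocation limit before multiplying, rather than relying on the allocator to
notice a wrapped 32-bit value later.

#include <stddef.h>

/* Hypothetical sketch: reject an oversized request before the
 * multiplication can overflow. */
#define MAX_ALLOC_SIZE 0x3fffffff       /* illustrative ~1 GB cap */

static size_t
safe_array_size(size_t nitems, size_t itemsize)
{
    if (itemsize == 0 || nitems > MAX_ALLOC_SIZE / itemsize)
        return 0;                       /* caller reports "too many items" */
    return nitems * itemsize;
}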
|
all the code that looks for other binaries. I moved FindExec into
port/exec.c (and renamed it to find_my_binary()). I also added
find_other_binary(), which looks for another binary in the same directory
as the calling program and checks its version string.
The only behavior change is that initdb and pg_dump used to fall back to
the hard-coded bindir directory if they couldn't find the requested binary
in the same directory as the caller; the new code throws an error instead.
The old behavior seemed too error-prone for version mismatches.
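Conceptually, the lookup behaves something like the sketch below (the
function name follows the commit text, but the body, signature, and
popen-based version check are illustrative assumptions, not the actual
port/exec.c code):

#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: look for "target" in the same directory as the
 * calling program and check that it reports the expected version string. */
static int
find_other_binary_sketch(const char *my_full_path, const char *target,
                         const char *expected_version,
                         char *retpath, size_t retlen)
{
    const char *slash = strrchr(my_full_path, '/');
    int         dirlen = slash ? (int) (slash - my_full_path) : 1;
    char        cmd[1024];
    char        line[128];
    FILE       *out;
    int         ok;

    /* "<directory of caller>/<target>" */
    snprintf(retpath, retlen, "%.*s/%s",
             dirlen, slash ? my_full_path : ".", target);

    /* Ask the binary for its version and compare the reported string. */
    snprintf(cmd, sizeof(cmd), "\"%s\" --version", retpath);
    if ((out = popen(cmd, "r")) == NULL)
        return -1;
    ok = (fgets(line, sizeof(line), out) != NULL &&
          strstr(line, expected_version) != NULL);
    pclose(out);
    return ok ? 0 : -1;
}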
|
rather than allowing them only in a few special cases as before. In
particular you can now pass a ROW() construct to a function that accepts
a rowtype parameter. Internal generation of RowExprs fixes a number of
corner cases that used to not work very well, such as referencing the
whole-row result of a JOIN or subquery. This represents a further step in
the work I started a month or so back to make rowtype values into
first-class citizens.
|
costing us lots more to maintain than it was worth. On shared tables
it was of exactly zero benefit because we couldn't trust it to be
up to date. On temp tables it sometimes saved an lseek, but not often
enough to be worth getting excited about. And the real problem was that
we forced an lseek on every relcache flush in order to update the field.
So all in all it seems best to lose the complexity.
|
by the SQL spec and by our parser. Thanks to Jonathan Scott for finding
this longstanding error.
|
have a more proper GUC-based test.
Also change the error return code to ERRCODE_INVALID_PARAMETER_VALUE so it
matches the old error return code.
|
per-query stage stats.
|
conversion of basic ASCII letters. Remove all uses of strcasecmp and
strncasecmp in favor of new functions pg_strcasecmp and pg_strncasecmp;
remove most but not all direct uses of toupper and tolower in favor of
pg_toupper and pg_tolower. These functions use the same notions of
case folding already developed for identifier case conversion. I left
the straight locale-based folding in place for situations where we are
just manipulating user data and not trying to match it to built-in
strings --- for example, the SQL upper() function is still locale
dependent. Perhaps this will prove not to be what's wanted, but at
the moment we can initdb and pass regression tests in Turkish locale.
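The core of such functions is locale-independent folding of the 26 ASCII
letters, roughly as sketched below (the real pg_strcasecmp/pg_toupper also
make provisions for high-bit characters, as described above):

/* Sketch of locale-independent case folding for 7-bit ASCII letters only. */
static unsigned char
ascii_toupper_sketch(unsigned char ch)
{
    if (ch >= 'a' && ch <= 'z')
        ch += 'A' - 'a';
    return ch;
}

static int
strcasecmp_ascii_sketch(const char *s1, const char *s2)
{
    for (;;)
    {
        unsigned char c1 = ascii_toupper_sketch((unsigned char) *s1++);
        unsigned char c2 = ascii_toupper_sketch((unsigned char) *s2++);

        if (c1 != c2)
            return (int) c1 - (int) c2;
        if (c1 == '\0')
            return 0;
    }
}

The point is that, for example, a Turkish locale upcases 'i' to a dotted
capital 'İ' rather than 'I', so purely locale-based folding can never match
keywords and other built-in ASCII strings.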
|
modify. Also fix a passel of problems with ALTER TABLE CLUSTER ON:
failure to check that the index is safe to cluster on (or even belongs
to the indicated rel, or even exists), and failure to broadcast a relcache
flush event when changing an index's state.
|
time_t; on some platforms they are not the same width. Per Manfred Koizar.
|
* ALTER ... ADD COLUMN with defaults and NOT NULL constraints works per SQL
spec. A default is implemented by rewriting the table with the new value
stored in each row.
* ALTER COLUMN TYPE. You can change a column's datatype to anything you
want, so long as you can specify how to convert the old value. Rewrites
the table. (Possible future improvement: optimize no-op conversions such
as varchar(N) to varchar(N+1).)
* Multiple ALTER actions in a single ALTER TABLE command. You can perform
any number of column additions, type changes, and constraint additions with
only one pass over the table contents.
Basic documentation provided in ALTER TABLE ref page, but some more docs
work is needed.
Original patch from Rod Taylor, additional work from Tom Lane.
|
> Please find attached a small patch that adds accessor functions
> for "aclitem" so that it is not an opaque datatype.
>
> I needed these functions to browse aclitems from user land. I can load
> them when necessary, but it seems to me that these accessors for a
> backend type belong to the backend, so I submit them.
>
> Fabien Coelho
|
for "aclitem" so that it is not an opaque datatype.
I needed these functions to browse aclitems from user land. I can load
them when necessary, but it seems to me that these accessors for a
backend type belong to the backend, so I submit them.
Fabien Coelho
|
report socket errors.
Magnus Hagander
|
Fix for code originally added for 7.5.
|
* removed a few redundant defines
* get_user_name safe under win32
* rationalized pipe read EOF for win32 (UPDATED PATCH USED)
* changed all backend instances of sleep() to pg_usleep
- except for the SLEEP_ON_ASSERT in assert.c, as it would exceed a
32-bit long [Note to patcher: If a SLEEP_ON_ASSERT of 2000 seconds is
acceptable, please replace with pg_usleep(2000000000L)]
I added a comment to that part of the code:
/*
* It would be nice to use pg_usleep() here, but only does 2000 sec
* or 33 minutes, which seems too short.
*/
sleep(1000000);
Claudio Natoli
|
calling proc_exit() directly. This should make SIGTERM more reliable.
|
"millennium" date part implementation in postgresql, both in the code
and the documentation, so that it conforms to the official definition.
If you do not agree with the official definition, please send your
complaint to "pope@vatican.org". I'm not responsible for them;-)
With the previous version, centuries and millennia had the wrong number
and started in the wrong year. Moreover, century number 0, which does not
exist in reality, lasted 200 years, and millennium number 0 lasted
2000 years.
If you want postgresql to have its own definition of "century" and
"millennium" that does not conform to the one society uses, just give
them another name. I would suggest "pgCENTURY" and "pgMILLENNIUM" ;-)
IMO, if someone uses these options, it means that postgresql is used for
historical data, so it makes sense to have a historical definition. Also,
if I just want to divide the year by 100 or 1000, I can do that quite easily.
BACKWARD INCOMPATIBLE CHANGE
Fabien Coelho - coelho@cri.ensmp.fr
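Under the official reckoning described above there is no year 0, so the
century arithmetic works roughly like this sketch (illustrative only):

/*
 * Illustrative sketch of the official reckoning: 1-100 AD is century 1,
 * 1901-2000 is century 20 and 2001 starts century 21; BC years count the
 * same way with a negative sign.
 */
static int
century_sketch(int year)        /* year != 0; BC years are passed as negative */
{
    if (year > 0)
        return (year - 1) / 100 + 1;
    return -((-year - 1) / 100 + 1);
}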
|
crash with debug in log_statement patch.
|
> >>with allowed values of "all, mod, ddl, none" with default "none".
OK, here is a patch that implements #1. Here is sample output:
test=> set client_min_messages = 'log';
SET
test=> set log_statement = 'mod';
SET
test=> select 1;
?column?
----------
1
(1 row)
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> copy test from '/tmp/x';
LOG: statement: copy test from '/tmp/x';
ERROR: relation "test" does not exist
test=> copy test to '/tmp/x';
ERROR: relation "test" does not exist
test=> prepare xx as select 1;
PREPARE
test=> prepare xx as update x set y=1;
LOG: statement: prepare xx as update x set y=1;
ERROR: relation "x" does not exist
test=> explain analyze select 1;;
QUERY PLAN
------------------------------------------------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.006..0.007 rows=1 loops=1)
Total runtime: 0.046 ms
(2 rows)
test=> explain analyze update test set x=1;
LOG: statement: explain analyze update test set x=1;
ERROR: relation "test" does not exist
test=> explain update test set x=1;
ERROR: relation "test" does not exist
It checks PREPARE and EXPLAIN ANALYZE too. The log_statement values are
'none', 'mod', 'ddl', and 'all'. For 'all', it prints before the query
is parsed, and for ddl/mod, it does it right after parsing using the
node tag (or command tag for CREATE/ALTER/DROP), so any non-parse errors
will print after the log line.
|
variable to control log output location on Unix and Win32.
Magnus Hagander
|
handle new postgresql.conf values arriving via SIGHUP better, by more
strictly enforcing USERLIMIT settings on existing non-super-user backends.
|
HPUX 11 ...)
|
results with tuples as ordinary varlena Datums. This commit does not
in itself do much for us, except eliminate the horrid memory leak
associated with evaluation of whole-row variables. However, it lays the
groundwork for allowing composite types as table columns, and perhaps
some other useful features as well. Per my proposal of a few days ago.
|
Fix to_char(year) for BC dates. Previously it returned one less than
the correct year.
Add documentation mentioning that there is no 0 AD.
|
is measured in kilobytes and checked against actual physical execution
stack depth, as per my proposal of 30-Dec. This gives us a fairly
bulletproof defense against crashing due to runaway recursive functions.
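The guard works by comparing the address of a local variable against a
reference address recorded near the bottom of the stack at startup; a rough
sketch of the idea (not the actual backend code, and the GUC value shown is
just an example):

/* Reference address recorded near the bottom of the stack at startup. */
static char *stack_base_ptr = NULL;
static int   max_stack_depth_kb = 2048;   /* illustrative value for the GUC */

void
record_stack_base(void)
{
    char stack_marker;

    stack_base_ptr = &stack_marker;        /* called once during startup */
}

/* Returns 1 when current stack usage exceeds the configured limit. */
int
stack_is_too_deep(void)
{
    char stack_marker;
    long depth = (long) (stack_base_ptr - &stack_marker);

    if (depth < 0)                          /* stacks may grow either way */
        depth = -depth;
    return depth > (long) max_stack_depth_kb * 1024L;
}

If the measured depth exceeds the limit, the backend can raise an error
instead of crashing on actual stack overflow.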
|
parameter description: postgresql.conf is not the place for
documenting the functionality of a GUC var.
|
listen_addresses parameter, as per recent discussion. The default behavior
is now to listen on localhost, which eliminates the need for the -i
postmaster switch in many scenarios.
Andrew Dunstan
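For example, in postgresql.conf (a minimal illustration of the new setting):

listen_addresses = 'localhost'    # new default: accept local connections only
#listen_addresses = '*'           # listen on all interfaces, as -i used to do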
|
followup to complaint from Korean User's Group.
|
Claudio Natoli
|
flesh out the index operator classes to include these. In passing,
fix erroneous volatility marking of ACL functions.
|
errors in internally-generated queries, such as those submitted by
plpgsql functions. Per recent discussions with Fabien Coelho.
|
of fighting it, avoid hard-wired (and wrong) assumption about max length
of prefix, cause %l to actually work as documented, don't compute data
we may not need.
|
TID (heap position). This doesn't do anything to the validity of the
finished index, but by pretending to qsort() that there are no really
equal keys in the sort, we can avoid performance problems with qsort
implementations that have trouble with large numbers of equal keys.
Patch from Manfred Koizar.
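The comparator idea looks roughly like this (an illustrative sketch with
made-up field names, not the actual tuplesort code): when the index keys
compare equal, fall back to the heap item's block and offset so no two
entries ever compare as equal.

/* Illustrative comparator: break ties on the heap item's physical position. */
typedef struct IndexEntrySketch
{
    int      key;           /* stand-in for the real index key(s) */
    unsigned blockno;       /* heap block number */
    unsigned offset;        /* line pointer offset within the block */
} IndexEntrySketch;

static int
compare_entries_sketch(const void *a, const void *b)
{
    const IndexEntrySketch *ea = (const IndexEntrySketch *) a;
    const IndexEntrySketch *eb = (const IndexEntrySketch *) b;

    if (ea->key != eb->key)
        return ea->key < eb->key ? -1 : 1;
    /* Keys are equal: fall back to the tuple's position in the heap. */
    if (ea->blockno != eb->blockno)
        return ea->blockno < eb->blockno ? -1 : 1;
    if (ea->offset != eb->offset)
        return ea->offset < eb->offset ? -1 : 1;
    return 0;
}

Since every heap tuple has a distinct position, the fallback yields a total
order, which sidesteps qsort implementations that degrade on many equal keys.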
|