path: root/src/interfaces/libpq/fe-exec.c

* Change TRUE/FALSE to true/false  (Peter Eisentraut, 2017-11-08)

  The lower case spellings are C and C++ standard and are used in most
  parts of the PostgreSQL sources. The upper case spellings are only used
  in some files/modules. So standardize on the standard spellings.

  The APIs for ICU, Perl, and Windows define their own TRUE and FALSE, so
  those are left as is when using those APIs.

  In code comments, we use the lower-case spelling for the C concepts and
  keep the upper-case spelling for the SQL concepts.

  Reviewed-by: Michael Paquier <michael.paquier@gmail.com>

* Reduce excessive dereferencing of function pointers  (Peter Eisentraut, 2017-09-07)

  It is equivalent in ANSI C to write (*funcptr) () and funcptr(). These
  two styles have been applied inconsistently. After discussion, we'll use
  the more verbose style for plain function pointer variables, to make it
  clear that it's a variable, and the shorter style when the function
  pointer is in a struct (s.func() or s->func()), because then it's clear
  that it's not a plain function name, and otherwise the excessive
  punctuation makes some of those invocations hard to read.

  Discussion: https://www.postgresql.org/message-id/f52c16db-14ed-757d-4b48-7ef360b1631d@2ndquadrant.com

* Teach libpq to detect integer overflow in the row count of a PGresult.  (Tom Lane, 2017-08-29)

  Adding more than 1 billion rows to a PGresult would overflow its ntups
  and tupArrSize fields, leading to client crashes. It'd be desirable to
  use wider fields on 64-bit machines, but because all of libpq's external
  APIs use plain "int" for row counters, that's going to be hard to
  accomplish without an ABI break.

  Given the lack of complaints so far, and the general pain that would be
  involved in using such huge PGresults, let's settle for just preventing
  the overflow and reporting a useful error message if it does happen.
  Also, for a couple more lines of code we can increase the threshold of
  trouble from INT_MAX/2 to INT_MAX rows. To do that, refactor
  pqAddTuple() to allow returning an error message that replaces the
  default assumption that it failed because of out-of-memory.

  Along the way, fix PQsetvalue() so that it reports all failures via
  pqInternalNotice(). It already did so in the case of bad field number,
  but neglected to report anything for other error causes.

  Because of the potential for crashes, this seems like a back-patchable
  bug fix, despite the lack of field reports.

  Michael Paquier, per a complaint from Igor Korot.

  Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com

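  To illustrate the kind of guard the entry above describes, here is a
  minimal sketch of appending a row pointer to a growable array while
  refusing to let the row counter overflow "int", and reporting the reason
  for failure instead of always assuming out-of-memory. It is not the
  actual libpq code; the function and parameter names are illustrative.

      #include <limits.h>
      #include <stdlib.h>

      /*
       * Sketch: append one row pointer to a growable array.  Returns 1 on
       * success; on failure returns 0 and sets *errmsgp (NULL meaning
       * "out of memory", anything else being a specific complaint).
       */
      static int
      add_row_guarded(void ***rows, int *nrows, int *arrsize, void *row,
                      const char **errmsgp)
      {
          if (*nrows >= *arrsize)
          {
              int     newsize;
              void  **newrows;

              if (*nrows >= INT_MAX / 2)
              {
                  /* a fuller version could keep growing up to INT_MAX */
                  *errmsgp = "row array would overflow the int row counter";
                  return 0;
              }
              newsize = (*arrsize > 0) ? *arrsize * 2 : 128;
              newrows = realloc(*rows, (size_t) newsize * sizeof(void *));
              if (newrows == NULL)
              {
                  *errmsgp = NULL;    /* plain out-of-memory */
                  return 0;
              }
              *rows = newrows;
              *arrsize = newsize;
          }
          (*rows)[(*nrows)++] = row;
          return 1;
      }
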
* Phase 3 of pgindent updates.  (Tom Lane, 2017-06-21)

  Don't move parenthesized lines to the left, even if that means they flow
  past the right margin.

  By default, BSD indent lines up statement continuation lines that are
  within parentheses so that they start just to the right of the preceding
  left parenthesis. However, traditionally, if that resulted in the
  continuation line extending to the right of the desired right margin,
  then indent would push it left just far enough to not overrun the
  margin, if it could do so without making the continuation line start to
  the left of the current statement indent. That makes for a weird mix of
  indentations unless one has been completely rigid about never violating
  the 80-column limit.

  This behavior has been pretty universally panned by Postgres developers.
  Hence, disable it with indent's new -lpl switch, so that parenthesized
  lines are always lined up with the preceding left paren.

  This patch is much less interesting than the first round of indent
  changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Phase 2 of pgindent updates.  (Tom Lane, 2017-06-21)

  Change pg_bsd_indent to follow upstream rules for placement of comments
  to the right of code, and remove pgindent hack that caused comments
  following #endif to not obey the general rule.

  Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
  the published version of pg_bsd_indent, but a hacked-up version that
  tried to minimize the amount of movement of comments to the right of
  code. The situation of interest is where such a comment has to be moved
  to the right of its default placement at column 33 because there's code
  there. BSD indent has always moved right in units of tab stops in such
  cases --- but in the previous incarnation, indent was working in 8-space
  tab stops, while now it knows we use 4-space tabs. So the net result is
  that in about half the cases, such comments are placed one tab stop left
  of before. This is better all around: it leaves more room on the line
  for comment text, and it means that in such cases the comment uniformly
  starts at the next 4-space tab stop after the code, rather than
  sometimes one and sometimes two tabs after.

  Also, ensure that comments following #endif are indented the same as
  comments following other preprocessor commands such as #else. That
  inconsistency turns out to have been self-inflicted damage from a
  poorly-thought-through post-indent "fixup" in pgindent.

  This patch is much less interesting than the first round of indent
  changes, but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Initial pgindent run with pg_bsd_indent version 2.0.  (Tom Lane, 2017-06-21)

  The new indent version includes numerous fixes thanks to Piotr
  Stefaniak. The main changes visible in this commit are:

  * Nicer formatting of function-pointer declarations.
  * No longer unexpectedly removes spaces in expressions using casts,
    sizeof, or offsetof.
  * No longer wants to add a space in "struct structname *varname", as
    well as some similar cases for const- or volatile-qualified pointers.
  * Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
  * Fixes bug where comments following declarations were sometimes placed
    with no space separating them from the code.
  * Fixes some odd decisions for comments following case labels.
  * Fixes some cases where comments following code were indented to less
    than the expected column 33.

  On the less good side, it now tends to put more whitespace around
  typedef names that are not listed in typedefs.list. This might encourage
  us to put more effort into typedef name collection; it's not really a
  bug in indent itself.

  There are more changes coming after this round, having to do with
  comment indentation and alignment of lines appearing within parentheses.
  I wanted to limit the size of the diffs to something that could be
  reviewed without one's eyes completely glazing over, so it seemed better
  to split up the changes as much as practical.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Spelling fixes in code comments  (Peter Eisentraut, 2017-03-14)

  From: Josh Soref <jsoref@gmail.com>

* Update copyright via script for 2017  (Bruce Momjian, 2017-01-03)

* In PQsendQueryStart(), avoid leaking any left-over async result.  (Tom Lane, 2016-10-10)

  Ordinarily there would not be an async result sitting around at this
  point, but it appears that in corner cases there can be. Considering all
  the work we're about to launch, it's hardly going to cost anything
  noticeable to check.

  It's been like this forever, so back-patch to all supported branches.

  Report: <CAD-Qf1eLUtBOTPXyFQGW-4eEsop31tVVdZPu4kL9pbQ6tJPO8g@mail.gmail.com>

* Add a nonlocalized version of the severity field to client error messages.  (Tom Lane, 2016-08-26)

  This has been requested a few times, but the use-case for it was never
  entirely clear. The reason for adding it now is that transmission of
  error reports from parallel workers fails when NLS is active, because
  pq_parse_errornotice() wrongly assumes that the existing severity field
  is nonlocalized. There are other ways we could have fixed that, but the
  other options were basically kluges, whereas this way provides something
  that's at least arguably a useful feature along with the bug fix.

  Per report from Jakob Egger. Back-patch into 9.6, because otherwise
  parallel query is essentially unusable in non-English locales. The
  problem exists in 9.5 as well, but we don't want to risk changing
  on-the-wire behavior in 9.5 (even though the possibility of new error
  fields is specifically called out in the protocol document). It may be
  sufficient to leave the issue unfixed in 9.5, given the very limited
  usefulness of pq_parse_errornotice in that version.

  Discussion: <A88E0006-13CB-49C6-95CC-1A77D717213C@eggerapps.at>

* Teach libpq to decode server version correctly from future servers.  (Tom Lane, 2016-08-05)

  Beginning with the next development cycle, PG servers will report
  two-part not three-part version numbers. Fix libpq so that it will
  compute the correct numeric representation of such server versions for
  reporting by PQserverVersion().

  It's desirable to get this into the field and back-patched ASAP, so that
  older clients are more likely to understand the new server version
  numbering by the time any such servers are in the wild. (The results
  with an old client would probably not be catastrophic anyway for a
  released server; for example "10.1" would be interpreted as 100100 which
  would be wrong in detail but would not likely cause an old client to
  misbehave badly. But "10devel" or "10beta1" would result in sversion==0
  which at best would result in disabling all use of modern features.)

  Extracted from a patch by Peter Eisentraut; comments added by me

  Patch: <802ec140-635d-ad86-5fdf-d3af0e260c22@2ndquadrant.com>

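  As an illustration of the numeric form PQserverVersion() reports, here
  is a sketch of decoding both the old three-part and new two-part version
  strings. It is not libpq's actual parsing code; the helper name and the
  exact handling of suffixes such as "devel" are assumptions.

      #include <stdio.h>
      #include <stdlib.h>

      /*
       * Sketch of the documented mapping:
       *   "9.6.3"   -> 90603   (major*10000 + minor*100 + rev)
       *   "10.1"    -> 100001  (major*10000 + minor)
       *   "10devel" -> 100000  (unreleased versions count as minor 0)
       */
      static int
      decode_server_version(const char *vstr)
      {
          int         parts[3] = {0, 0, 0};
          int         nparts = 0;
          const char *p = vstr;

          while (nparts < 3)
          {
              char   *end;
              long    n = strtol(p, &end, 10);

              if (end == p)
                  break;          /* hit "devel", "beta1", etc. */
              parts[nparts++] = (int) n;
              if (*end != '.')
                  break;
              p = end + 1;
          }

          if (nparts == 0)
              return 0;           /* unparseable */
          if (nparts >= 3 || parts[0] < 10)
              return parts[0] * 10000 + parts[1] * 100 + parts[2];
          return parts[0] * 10000 + parts[1]; /* two-part scheme */
      }

      int
      main(void)
      {
          printf("%d %d %d\n",
                 decode_server_version("9.6.3"),    /* 90603 */
                 decode_server_version("10.1"),     /* 100001 */
                 decode_server_version("10devel")); /* 100000 */
          return 0;
      }
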
* Add libpq support for recreating an error message with different verbosity.  (Tom Lane, 2016-04-03)

  Often, upon getting an unexpected error in psql, one's first wish is
  that the verbosity setting had been higher; for example, to be able to
  see the schema-name field or the server code location info. Up to now
  the only way has been to adjust the VERBOSITY variable and repeat the
  failing query. That's a pain, and it doesn't work if the error isn't
  reproducible. This commit adds support in libpq for regenerating the
  error message for an existing error PGresult at any desired verbosity
  level.

  This is almost just a matter of refactoring the existing code into a
  subroutine, but there is one bit of possibly-needed information that was
  not getting put into PGresults: the text of the last query sent to the
  server. We must add that string to the contents of an error PGresult.
  But we only need to save it if it might be used, which with the existing
  error-formatting code only happens if there is a
  PG_DIAG_STATEMENT_POSITION error field, which is probably pretty rare
  for errors in production situations. So really the overhead when the
  feature isn't used should be negligible.

  Alex Shulgin, reviewed by Daniel Vérité, some improvements by me

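  Assuming the entry point is the PQresultVerboseErrorMessage() function
  that current libpq documents, a usage sketch could look like this; the
  helper name and the error handling are illustrative only.

      #include <stdio.h>
      #include <libpq-fe.h>

      /*
       * Sketch: after a query fails, regenerate its error message at a
       * higher verbosity without re-running the query.
       */
      static void
      report_verbosely(const PGresult *res)
      {
          char   *msg = PQresultVerboseErrorMessage(res,
                                                    PQERRORS_VERBOSE,
                                                    PQSHOW_CONTEXT_ALWAYS);

          if (msg != NULL)
          {
              fprintf(stderr, "%s", msg);
              PQfreemem(msg);   /* the regenerated message must be freed */
          }
      }
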
* Update copyright for 2016  (Bruce Momjian, 2016-01-02)

  Backpatch certain files through 9.1

* Fix unwanted flushing of libpq's input buffer when socket EOF is seen.  (Tom Lane, 2015-11-12)

  In commit 210eb9b743c0645d I centralized libpq's logic for closing down
  the backend communication socket, and made the new pqDropConnection
  routine always reset the I/O buffers to empty. Many of the call sites
  previously had not had such code, and while that amounted to an
  oversight in some cases, there was one place where it was intentional
  and necessary *not* to flush the input buffer: pqReadData should never
  cause that to happen, since we probably still want to process whatever
  data we read.

  This is the true cause of the problem Robert was attempting to fix in
  c3e7c24a1d60dc6a, namely that libpq no longer reported the backend's
  final ERROR message before reporting "server closed the connection
  unexpectedly". But that only accidentally fixed it, by invoking
  parseInput before the input buffer got flushed; and very likely there
  are timing scenarios where we'd still lose the message before processing
  it.

  To fix, pass a flag to pqDropConnection to tell it whether to flush the
  input buffer or not. On review I think flushing is actually correct for
  every other call site.

  Back-patch to 9.3 where the problem was introduced. In HEAD, also
  improve the comments added by c3e7c24a1d60dc6a.

* libpq: Notice errors a backend may have sent just before dying.  (Robert Haas, 2015-11-12)

  At least since the introduction of Hot Standby, the backend has
  sometimes sent fatal errors even when no client query was in progress,
  assuming that the client would receive it. However, pqHandleSendFailure
  was not in sync with this assumption, and only tries to catch notices
  and notifies. Add a parseInput call to the loop there to fix.

  Andres Freund suggested the fix. Comments are by me. Reviewed by
  Michael Paquier.

* Fix documentation for libpq's PQfn().  (Tom Lane, 2015-03-08)

  The SGML docs claimed that 1-byte integers could be sent or received
  with the "isint" options, but no such behavior has ever been implemented
  in pqGetInt() or pqPutInt(). The in-code documentation header for PQfn()
  was even less in tune with reality, and the code itself used parameter
  names matching neither the SGML docs nor its libpq-fe.h declaration. Do
  a bit of additional wordsmithing on the SGML docs while at it.

  Since the business about 1-byte integers is a clear documentation bug,
  back-patch to all supported branches.

* Some more FLEXIBLE_ARRAY_MEMBER fixes.  (Tom Lane, 2015-02-21)

* Replace a bunch more uses of strncpy() with safer coding.  (Tom Lane, 2015-01-24)

  strncpy() has a well-deserved reputation for being unsafe, so make an
  effort to get rid of nearly all occurrences in HEAD.

  A large fraction of the remaining uses were passing length less than or
  equal to the known strlen() of the source, in which case no null-padding
  can occur and the behavior is equivalent to memcpy(), though doubtless
  slower and certainly harder to reason about. So just use memcpy() in
  these cases. In other cases, use either StrNCpy() or strlcpy() as
  appropriate (depending on whether padding to the full length of the
  destination buffer seems useful).

  I left a few strncpy() calls alone in the src/timezone/ code, to keep it
  in sync with upstream (the IANA tzcode distribution). There are also a
  few such calls in ecpg that could possibly do with more analysis.

  AFAICT, none of these changes are more than cosmetic, except for the
  four occurrences in fe-secure-openssl.c, which are in fact buggy: an
  overlength source leads to a non-null-terminated destination buffer and
  ensuing misbehavior. These don't seem like security issues, first
  because no stack clobber is possible and second because if your values
  of sslcert etc are coming from untrusted sources then you've got
  problems way worse than this. Still, it's undesirable to have
  unpredictable behavior for overlength inputs, so back-patch those four
  changes to all active branches.

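  The two replacement patterns described above, sketched here on stand-in
  buffers (the function and variable names are illustrative, not taken
  from the patch): plain memcpy() when the copy is known to fit, and a
  bounded copy with explicit termination (the strlcpy-style pattern) when
  the source may be too long.

      #include <string.h>

      /* Sketch only; assumes dst points to a buffer of dstsize bytes. */
      static void
      copy_examples(char *dst, size_t dstsize, const char *src)
      {
          size_t  len = strlen(src);

          if (dstsize == 0)
              return;

          if (len < dstsize)
          {
              /*
               * Copy fits: memcpy of strlen(src)+1 bytes behaves like the
               * strncpy call it replaces, minus the useless null-padding.
               */
              memcpy(dst, src, len + 1);
          }
          else
          {
              /*
               * Source too long: unlike strncpy, always leave dst
               * null-terminated after truncation.
               */
              memcpy(dst, src, dstsize - 1);
              dst[dstsize - 1] = '\0';
          }
      }
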
* Update copyright for 2015  (Bruce Momjian, 2015-01-06)

  Backpatch certain files through 9.0

* pgindent run for 9.4  (Bruce Momjian, 2014-05-06)

  This includes removing tabs after periods in C comments, which was
  applied to back branches, so this change should not affect backpatching.

* libpq: use pgsocket for socket values, for portability  (Bruce Momjian, 2014-04-16)

  Previously, 'int' was used for socket values in libpq, but socket values
  are unsigned on Windows. This is a style correction.

  Initial patch and previous PGINVALID_SOCKET initial patch by Joel
  Jacobson, modified by me

  Report from PVS-Studio

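  Roughly what the portability shim looks like (a paraphrase for
  illustration, not the exact PostgreSQL headers): on Windows a socket is
  an unsigned SOCKET handle, so a plain int and the usual "< 0" validity
  test are not safe, and comparison against a dedicated invalid value is
  used instead.

      /* Paraphrased typedef; the real definitions live in the PG headers. */
      #ifdef _WIN32
      #include <winsock2.h>
      typedef SOCKET pgsocket_like;           /* unsigned handle on Windows */
      #define PGINVALID_SOCKET_LIKE INVALID_SOCKET
      #else
      typedef int pgsocket_like;              /* plain file descriptor */
      #define PGINVALID_SOCKET_LIKE (-1)
      #endif

      /* Portable validity test: compare, don't check for "< 0". */
      static int
      socket_is_valid(pgsocket_like s)
      {
          return s != PGINVALID_SOCKET_LIKE;
      }
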
* Fix whitespace  (Peter Eisentraut, 2014-03-03)

* Various Coverity-spotted fixes  (Stephen Frost, 2014-03-01)

  A number of issues were identified by the Coverity scanner and are
  addressed in this patch. None of these appear to be security issues and
  many are mostly cosmetic changes. Short comments for each of the changes
  follows.

  Correct the semi-colon placement in be-secure.c regarding SSL retries.
  Remove a useless comparison-to-NULL in proc.c (value is dereferenced
  prior to this check and therefore can't be NULL).
  Add checking of chmod() return values to initdb.
  Fix a couple minor memory leaks in initdb.
  Fix memory leak in pg_ctl - involves free'ing the config file contents.
  Use an int to capture fgetc() return instead of an enum in pg_dump.
  Fix minor memory leaks in pg_dump. (note minor change to
  convertOperatorReference()'s API)
  Check fclose()/remove() return codes in psql.
  Check fstat(), find_my_exec() return codes in psql.
  Various ECPG memory leak fixes.
  Check find_my_exec() return in ECPG.
  Explicitly ignore pqFlush return in libpq error-path.
  Change PQfnumber() to avoid doing an strdup() when no changes required.
  Remove a few useless check-against-NULL's (value deref'd beforehand).
  Check rmtree(), malloc() results in pg_regress.
  Also check get_alternative_expectfile() return in pg_regress.

* Improve libpq's error recovery for connection loss during COPY.  (Tom Lane, 2014-02-12)

  In pqSendSome, if the connection is already closed at entry, discard any
  queued output data before returning. There is no possibility of ever
  sending the data, and anyway this corresponds to what we'd do if we'd
  detected a hard error while trying to send(). This avoids possible
  indefinite bloat of the output buffer if the application keeps trying to
  send data (or even just keeps trying to do PQputCopyEnd, as psql indeed
  will).

  Because PQputCopyEnd won't transition out of PGASYNC_COPY_IN state until
  it's successfully queued the COPY END message, and pqPutMsgEnd doesn't
  distinguish a queuing failure from a pqSendSome failure, this omission
  allowed an infinite loop in psql if the connection closure occurred when
  we had at least 8K queued to send. It might be worth refactoring so that
  we can make that distinction, but for the moment the other changes made
  here seem to offer adequate defenses.

  To guard against other variants of this scenario, do not allow
  PQgetResult to return a PGRES_COPY_XXX result if the connection is
  already known dead. Make sure it returns PGRES_FATAL_ERROR instead.

  Per report from Stephen Frost. Back-patch to all active branches.

* Update copyright for 2014  (Bruce Momjian, 2014-01-07)

  Update all files in head, and files COPYRIGHT and legal.sgml in all back
  branches.

* Additional spelling corrections  (Stephen Frost, 2013-06-03)

  A few more minor spelling corrections, no functional changes.

  Thom Brown

* pgindent run for release 9.3  (Bruce Momjian, 2013-05-29)

  This is the first run of the Perl-based pgindent script. Also update
  pgindent instructions.

* Update copyrights for 2013  (Bruce Momjian, 2013-01-01)

  Fully update git head, and update back branches in ./COPYRIGHT and
  legal.sgml files.

* Allow a streaming replication standby to follow a timeline switch.  (Heikki Linnakangas, 2012-12-13)

  Before this patch, streaming replication would refuse to start
  replicating if the timeline in the primary doesn't exactly match the
  standby. The situation where it doesn't match is when you have a master,
  and two standbys, and you promote one of the standbys to become new
  master. Promoting bumps up the timeline ID, and after that bump, the
  other standby would refuse to continue.

  There's significantly more timeline related logic in streaming
  replication now. First of all, when a standby connects to primary, it
  will ask the primary for any timeline history files that are missing
  from the standby. The missing files are sent using a new replication
  command TIMELINE_HISTORY, and stored in standby's pg_xlog directory.
  Using the timeline history files, the standby can follow the latest
  timeline present in the primary (recovery_target_timeline='latest'),
  just as it can follow new timelines appearing in an archive directory.

  START_REPLICATION now takes a TIMELINE parameter, to specify exactly
  which timeline to stream WAL from. This allows the standby to request
  the primary to send over WAL that precedes the promotion. The
  replication protocol is changed slightly (in a backwards-compatible way
  although there's little hope of streaming replication working across
  major versions anyway), to allow replication to stop when the end of
  timeline reached, putting the walsender back into accepting a
  replication command.

  Many thanks to Amit Kapila for testing and reviewing various versions of
  this patch.

* Add runtime checks for number of query parameters passed to libpq functions.  (Heikki Linnakangas, 2012-08-13)

  The maximum number of parameters supported by the FE/BE protocol is
  65535, as it's transmitted as a 16-bit unsigned integer. However, the
  nParams arguments to libpq functions are all of type 'int'. We can't
  change the signature of libpq functions, but a simple bounds check is in
  order to make it more clear what's going wrong if you try to pass more
  than 65535 parameters.

  Per complaint from Jim Vanns.

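  The check itself is simple; a sketch of the idea (the macro name and the
  error-reporting helper below are stand-ins, not the libpq internals):

      #include <stdio.h>

      /* The wire protocol carries the parameter count as 16 bits. */
      #define PARAM_COUNT_LIMIT 65535

      /* Sketch: reject out-of-range counts before building the message. */
      static int
      check_param_count(int nParams, char *errbuf, size_t errbufsize)
      {
          if (nParams < 0 || nParams > PARAM_COUNT_LIMIT)
          {
              snprintf(errbuf, errbufsize,
                       "number of parameters must be between 0 and %d\n",
                       PARAM_COUNT_LIMIT);
              return 0;
          }
          return 1;
      }
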
* Replace libpq's "row processor" API with a "single row" mode.  (Tom Lane, 2012-08-02)

  After taking awhile to digest the row-processor feature that was added
  to libpq in commit 92785dac2ee7026948962cd61c4cd84a2d052772, we've
  concluded it is over-complicated and too hard to use. Leave the core
  infrastructure changes in place (that is, there's still a row processor
  function inside libpq), but remove the exposed API pieces, and instead
  provide a "single row" mode switch that causes PQgetResult to return one
  row at a time in separate PGresult objects.

  This approach incurs more overhead than proper use of a row processor
  callback would, since construction of a PGresult per row adds extra
  cycles. However, it is far easier to use and harder to break. The
  single-row mode still affords applications the primary benefit that the
  row processor API was meant to provide, namely not having to accumulate
  large result sets in memory before processing them. Preliminary testing
  suggests that we can probably buy back most of the extra cycles by
  micro-optimizing construction of the extra results, but that task will
  be left for another day.

  Marko Kreen

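  A usage sketch of the mode described above, assuming the switch is the
  PQsetSingleRowMode() that libpq documents today; the query, connection
  setup, and error handling are illustrative only.

      #include <stdio.h>
      #include <libpq-fe.h>

      /* Sketch: stream a large result one row at a time. */
      static void
      stream_rows(PGconn *conn)
      {
          PGresult   *res;

          if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
              return;
          if (!PQsetSingleRowMode(conn))  /* call before any PQgetResult */
              fprintf(stderr, "could not enter single-row mode\n");

          while ((res = PQgetResult(conn)) != NULL)
          {
              if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                  printf("row: %s\n", PQgetvalue(res, 0, 0));
              /* the final PGRES_TUPLES_OK result carries no rows */
              PQclear(res);
          }
      }
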
* Run pgindent on 9.2 source tree in preparation for first 9.3 commit-fest.  (Bruce Momjian, 2012-06-10)

* Lots of doc corrections.  (Robert Haas, 2012-04-23)

  Josh Kupershmidt

* Add a "row processor" API to libpq for better handling of large results.  (Tom Lane, 2012-04-04)

  Traditionally libpq has collected an entire query result before passing
  it back to the application. That provides a simple and transactional
  API, but it's pretty inefficient for large result sets. This patch
  allows the application to process each row on-the-fly instead of
  accumulating the rows into the PGresult. Error recovery becomes a bit
  more complex, but often that tradeoff is well worth making.

  Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane

* Update copyright notices for year 2012.  (Bruce Momjian, 2012-01-01)

* Tweak PQresStatus() to avoid a clang compiler warning.  (Robert Haas, 2011-08-05)

  The previous test for status < 0 is in fact testing nothing if the
  compiler considers an enum to be an unsigned data type. clang doesn't
  like tautologies, so do this instead.

  Report by Peter Geoghegan, fix as suggested by Tom Lane.

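  The shape of that fix, shown on a stand-in enum rather than the real
  ExecStatusType table: when the compiler treats the enum as unsigned, the
  "status < 0" half of the range check is always false, so a single
  unsigned comparison does the whole job without the tautology.

      typedef enum { STATUS_A, STATUS_B, STATUS_C } demo_status;

      static const char *const demo_names[] = {"A", "B", "C"};

      #define lengthof(arr) (sizeof(arr) / sizeof((arr)[0]))

      static const char *
      demo_status_name(demo_status status)
      {
          /*
           * Instead of "status < 0 || status >= lengthof(demo_names)",
           * whose first half may be a tautology for an unsigned enum,
           * make one unsigned comparison.
           */
          if ((unsigned int) status >= lengthof(demo_names))
              return "invalid status";
          return demo_names[status];
      }
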
* Fix PQsetvalue() to avoid possible crash when adding a new tuple.  (Tom Lane, 2011-07-21)

  PQsetvalue unnecessarily duplicated the logic in pqAddTuple, and didn't
  duplicate it exactly either --- pqAddTuple does not care what is in the
  tuple-pointer array positions beyond the last valid entry, whereas the
  code in PQsetvalue assumed such positions would contain NULL. This led
  to possible crashes if PQsetvalue was applied to a PGresult that had
  previously been enlarged with pqAddTuple, for instance one built from a
  server query. Fix by relying on pqAddTuple instead of duplicating logic,
  and not assuming anything about the contents of res->tuples[res->ntups].

  Back-patch to 8.4, where PQsetvalue was introduced.

  Andrew Chernow

* pgindent run before PG 9.1 beta 1.  (Bruce Momjian, 2011-04-10)

* Adjust error message, now that we expect other message types than connection close at this point.  (Heikki Linnakangas, 2011-03-30)

  Fix PQsetnonblocking() comment.

  Fujii Masao

* Stamp copyrights for year 2011.  (Bruce Momjian, 2011-01-01)

* Fix ill-advised placement of PGRES_COPY_BOTH enum value.  (Tom Lane, 2010-12-28)

  It must be added at the end of the ExecStatusType enum to avoid ABI
  breakage compared to previous libpq versions. Noted by Magnus.

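  Why the placement matters, shown on a stand-in enum (the real
  ExecStatusType has more members): enumerators get consecutive integer
  values, so inserting a new one in the middle renumbers everything after
  it, while appending leaves the values already compiled into existing
  clients untouched.

      /* Illustrative only; not the real ExecStatusType definition. */
      typedef enum
      {
          DEMO_EMPTY_QUERY = 0,   /* existing members keep their values */
          DEMO_COMMAND_OK,        /* 1 */
          DEMO_TUPLES_OK,         /* 2 */
          DEMO_COPY_OUT,          /* 3 */
          DEMO_COPY_IN,           /* 4 */
          DEMO_COPY_BOTH          /* appended: 5; nothing else renumbered */
      } demo_exec_status;
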
* Allow bidirectional copy messages in streaming replication mode.  (Robert Haas, 2010-12-11)

  Fujii Masao. Review by Alvaro Herrera, Tom Lane, and myself.

* Remove cvs keywords from all files.  (Magnus Hagander, 2010-09-20)

* pgindent run for 9.0  (Bruce Momjian, 2010-02-26)

* Stamp HEAD as 9.0devel, and update various places that were referring to 8.5  (Tom Lane, 2010-02-17)

  (hope I got 'em all). Per discussion, this release will be 9.0 not 8.5.

* Have SELECT and CREATE TABLE AS queries return a row count.  (Bruce Momjian, 2010-02-16)

  While this is invisible in psql, other interfaces, like libpq, make this
  value visible.

  Boszormenyi Zoltan

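  One way a libpq client can observe the new count, assuming the usual
  command-tag accessors PQcmdStatus() and PQcmdTuples() report it for
  these commands as they do for INSERT/UPDATE/DELETE; the query is
  illustrative.

      #include <stdio.h>
      #include <libpq-fe.h>

      /* Sketch: read the row count a SELECT now carries in its command tag. */
      static void
      show_row_count(PGconn *conn)
      {
          PGresult   *res = PQexec(conn, "SELECT * FROM pg_class");

          if (res != NULL && PQresultStatus(res) == PGRES_TUPLES_OK)
              printf("command tag: %s, rows: %s\n",
                     PQcmdStatus(res), PQcmdTuples(res));
          PQclear(res);
      }
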
* Fix unsafe loop test, and declare as_ident as bool rather than int.  (Robert Haas, 2010-01-21)

* Add new escaping functions PQescapeLiteral and PQescapeIdentifier.  (Robert Haas, 2010-01-21)

  PQescapeLiteral is similar to PQescapeStringConn, but it relieves the
  caller of the need to know how large the output buffer should be, and it
  provides the appropriate quoting (in addition to escaping special
  characters within the string). PQescapeIdentifier provides similar
  functionality for escaping identifiers.

  Per recent discussion with Tom Lane.

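  A usage sketch of the two functions (the table and column names are
  illustrative): both return newly allocated strings that already include
  the surrounding quotes, must be released with PQfreemem(), and return
  NULL on error.

      #include <stdio.h>
      #include <string.h>
      #include <libpq-fe.h>

      /* Sketch: build a query from a user-supplied identifier and literal. */
      static void
      escaped_query(PGconn *conn, const char *colname, const char *value)
      {
          char   *ident = PQescapeIdentifier(conn, colname, strlen(colname));
          char   *lit = PQescapeLiteral(conn, value, strlen(value));

          if (ident != NULL && lit != NULL)
          {
              char    query[1024];

              snprintf(query, sizeof(query),
                       "SELECT %s FROM mytable WHERE %s = %s",
                       ident, ident, lit);
              printf("%s\n", query);
          }
          if (ident != NULL)
              PQfreemem(ident);
          if (lit != NULL)
              PQfreemem(lit);
      }
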
* Update copyright for the year 2010.  (Bruce Momjian, 2010-01-02)

* Teach PQescapeByteaConn() to use hex format when the target connection is to a server >= 8.5.  (Tom Lane, 2009-08-04)

  Per my proposal in discussion of hex-format patch.

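  For reference, a usage sketch of the function involved (the payload is
  illustrative): against a hex-capable server the escaped form comes back
  as a "\x..." hex string rather than octal escapes, and the result must
  be released with PQfreemem().

      #include <stdio.h>
      #include <libpq-fe.h>

      /* Sketch: escape binary data for inclusion in a SQL literal. */
      static void
      escape_bytes(PGconn *conn)
      {
          const unsigned char data[] = {0xde, 0xad, 0xbe, 0xef};
          size_t          escaped_len = 0;
          unsigned char  *escaped = PQescapeByteaConn(conn, data,
                                                      sizeof(data),
                                                      &escaped_len);

          if (escaped != NULL)
          {
              /* against a newer server this prints something like \xdeadbeef */
              printf("escaped: %s\n", escaped);
              PQfreemem(escaped);
          }
      }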