path: root/src/fe_utils/string_utils.c
Commit history (commit message, author, date)

* In fmtIdEnc(), handle failure of enlargePQExpBuffer().  (Tom Lane, 2025-02-16)

  Coverity complained that we weren't doing that, and it's right. This fix just
  makes fmtIdEnc() honor the general convention that OOM causes a PQExpBuffer to
  become marked "broken", without any immediate error. In the pretty-unlikely
  case that we actually did hit OOM here, the end result would be to return an
  empty string to the caller, probably resulting in invalid SQL syntax in an
  issued command (if nothing else went wrong, which is even more unlikely).

  It's tempting to throw an "out of memory" error if the buffer becomes broken,
  but there's not a lot of point in doing that only here and not in hundreds of
  other PQExpBuffer-using places in pg_dump and similar callers. The whole issue
  could do with some non-time-crunched redesign, perhaps.

  This is a followup to the fixes for CVE-2025-1094, and should be included if
  cherry-picking those fixes.

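  A minimal sketch of the PQExpBuffer OOM convention the message refers to; the
  surrounding command text is invented, and the check shown is the generic
  PQExpBufferBroken() test from pqexpbuffer.h, not code from the commit.

```c
/*
 * Illustrative only: on allocation failure a PQExpBuffer is marked "broken"
 * rather than reporting an error immediately; callers that care must test
 * for that state themselves with PQExpBufferBroken().
 */
#include "postgres_fe.h"
#include "pqexpbuffer.h"

static bool
build_drop_command(PQExpBuffer buf, const char *quoted_ident)
{
    appendPQExpBufferStr(buf, "DROP TABLE ");
    appendPQExpBufferStr(buf, quoted_ident);    /* assumed to be pre-quoted */
    appendPQExpBufferChar(buf, ';');

    if (PQExpBufferBroken(buf))
    {
        /* OOM somewhere above: the buffer contents are not trustworthy */
        fprintf(stderr, "out of memory\n");
        return false;
    }
    return true;
}
```
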
* Make escaping functions retain trailing bytes of an invalid character.  (Tom Lane, 2025-02-15)

  Instead of dropping the trailing byte(s) of an invalid or incomplete multibyte
  character, replace only the first byte with a known-invalid sequence, and
  process the rest normally. This seems less likely to confuse incautious
  callers than the behavior adopted in 5dc1e42b4.

  While we're at it, adjust PQescapeStringInternal to produce at most one bleat
  about invalid multibyte characters per string. This matches the behavior of
  PQescapeInternal, and avoids the risk of producing tons of repetitive junk if
  a long string is simply given in the wrong encoding.

  This is a followup to the fixes for CVE-2025-1094, and should be included if
  cherry-picking those fixes.

  Author: Andres Freund <andres@anarazel.de>
  Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
  Reported-by: Jeff Davis <pgsql@j-davis.com>
  Discussion: https://postgr.es/m/20250215012712.45@rfd.leadboat.com
  Backpatch-through: 13

* Adapt appendPsqlMetaConnect() to the new fmtId() encoding expectations.  (Tom Lane, 2025-02-10)

  We need to tell fmtId() what encoding to assume, but this function doesn't
  know that. Fortunately we can fix that without changing the function's API,
  because we can just use SQL_ASCII. That's because database names in connection
  requests are effectively binary not text: no encoding-aware processing will
  happen on them.

  This fixes XversionUpgrade failures seen in the buildfarm. The alternative of
  having pg_upgrade use setFmtEncoding() is unappetizing, given that it's
  connecting to multiple databases that may have different encodings.

  Andres Freund, Noah Misch, Tom Lane
  Security: CVE-2025-1094

* Fix handling of invalidly encoded data in escaping functions  (Andres Freund, 2025-02-10)

  Previously invalidly encoded input to various escaping functions could lead to
  the escaped string getting incorrectly parsed by psql. To be safe, escaping
  functions need to ensure that neither invalid nor incomplete multi-byte
  characters can be used to "escape" from being quoted.

  Functions which can report errors now return an error in more cases than
  before. Functions that cannot report errors now replace invalid input bytes
  with a byte sequence that cannot be used to escape the quotes and that is
  guaranteed to error out when a query is sent to the server.

  The following functions are fixed by this commit:
  - PQescapeLiteral()
  - PQescapeIdentifier()
  - PQescapeString()
  - PQescapeStringConn()
  - fmtId()
  - appendStringLiteral()

  Reported-by: Stephen Fewer <stephen_fewer@rapid7.com>
  Reviewed-by: Noah Misch <noah@leadboat.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Backpatch-through: 13
  Security: CVE-2025-1094

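  A minimal usage sketch (not part of the commit) showing the calling pattern
  for the error-reporting functions in the list above: PQescapeLiteral()
  returns NULL on failure, for example on invalidly encoded input, so the
  caller must check before using the result. The connection string and input
  text are invented.

```c
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");     /* illustrative conninfo */
    const char *untrusted = "user-supplied \xbd text";     /* possibly mis-encoded */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    char *lit = PQescapeLiteral(conn, untrusted, strlen(untrusted));

    if (lit == NULL)
    {
        /* invalid encoding, out of memory, etc.: do not build SQL from it */
        fprintf(stderr, "escaping failed: %s", PQerrorMessage(conn));
    }
    else
    {
        printf("safe literal: %s\n", lit);
        PQfreemem(lit);
    }

    PQfinish(conn);
    return 0;
}
```
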
* Specify the encoding of input to fmtId()  (Andres Freund, 2025-02-10)

  This commit adds fmtIdEnc() and fmtQualifiedIdEnc(), which allow the encoding
  to be specified as an explicit argument. Additionally, setFmtEncoding() is
  provided, which defines the encoding to use when no explicit encoding is
  given, so that existing code using fmtId() does not break. All users of
  fmtId()/fmtQualifiedId() are either converted to the explicit version or a
  call to setFmtEncoding() has been added.

  This commit does not yet utilize the now well-defined encoding; that will
  happen in a subsequent commit.

  Reviewed-by: Noah Misch <noah@leadboat.com>
  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Backpatch-through: 13
  Security: CVE-2025-1094

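  A sketch of the two calling styles described above. The signatures of
  fmtIdEnc() and setFmtEncoding() are assumed from fe_utils/string_utils.h and
  the table name is invented; treat this as illustrative rather than
  authoritative.

```c
#include "postgres_fe.h"
#include "fe_utils/string_utils.h"

static void
quote_examples(PGconn *conn, const char *tablename)
{
    /* explicit form: pass the connection's client encoding directly */
    printf("DROP TABLE %s;\n", fmtIdEnc(tablename, PQclientEncoding(conn)));

    /* implicit form: establish a default once, then plain fmtId() is well defined */
    setFmtEncoding(PQclientEncoding(conn));
    printf("DROP TABLE %s;\n", fmtId(tablename));
}
```
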
* Update copyright for 2025  (Bruce Momjian, 2025-01-01)

  Backpatch-through: 13

* Update copyright for 2024  (Bruce Momjian, 2024-01-03)

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12

* Handle \v as a whitespace character in parsers  (Michael Paquier, 2023-07-06)

  This commit comes as a continuation of the discussion that has led to d522b05,
  as \v was handled inconsistently when parsing array values or anything going
  through the parsers, and changing a parser behavior in stable branches is a
  scary thing to do. The parsing of array values now uses the more central
  scanner_isspace() and array_isspace() is removed.

  As pointed out by Peter Eisentraut, fix a confusing reference to horizontal
  space in the parsers with the term "horiz_space". \f was included in this set
  since 3cfdd8f from 2000, but it is not horizontal. "horiz_space" is renamed to
  "non_newline_space", to refer to all whitespace characters except newlines.

  The changes impact the parsers for the backend, psql, seg, cube, ecpg and
  replication commands. Note that JSON should not escape \v, as per RFC 7159, so
  these are not touched.

  Reviewed-by: Peter Eisentraut, Tom Lane
  Discussion: https://postgr.es/m/ZJKcjNwWHHvw9ksQ@paquier.xyz

* Update copyright for 2023  (Bruce Momjian, 2023-01-02)

  Backpatch-through: 11

* Fix minor memory leaks in psql's tab completion.  (Tom Lane, 2022-07-22)

  Tang Haiying and Tom Lane
  Discussion: https://postgr.es/m/OS0PR01MB6113EA19F05E217C823B4CCAFB909@OS0PR01MB6113.jpnprd01.prod.outlook.com

* Remove redundant null pointer checks before free()  (Peter Eisentraut, 2022-07-03)

  Per applicable standards, free() with a null pointer is a no-op. Systems that
  don't observe that are ancient and no longer relevant. Some PostgreSQL code
  already required this behavior, so this change does not introduce any new
  requirements, just makes the code more consistent.

  Discussion: https://www.postgresql.org/message-id/flat/dac5d2d0-98f5-94d9-8e69-46da2413593d%40enterprisedb.com

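  An illustration of the simplification, with hypothetical function names;
  free(NULL) is defined by the C standard to do nothing, so the guard is
  redundant.

```c
#include <stdlib.h>

/* Before: redundant null check */
static void
release_old(char *buf)
{
    if (buf != NULL)
        free(buf);
}

/* After: equivalent, because free(NULL) is a no-op */
static void
release_new(char *buf)
{
    free(buf);
}
```
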
* Allow db.schema.table patterns, but complain about random garbage.  (Robert Haas, 2022-04-20)

  psql, pg_dump, and pg_amcheck share code to process object name patterns like
  'foo*.bar*' to match all tables with names starting in 'bar' that are in
  schemas starting with 'foo'. Before v14, any number of extra name parts were
  silently ignored, so a command line '\d foo.bar.baz.bletch.quux' was
  interpreted as '\d bletch.quux'. In v14, as a result of commit
  2c8726c4b0a496608919d1f78a5abc8c9b6e0868, we instead treated this as a request
  for table quux in a schema named 'foo.bar.baz.bletch'.

  That caused problems for people like Justin Pryzby who were accustomed to
  copying strings of the form db.schema.table from messages generated by
  PostgreSQL itself and using them as arguments to \d. Accordingly, revise
  things so that if an object name pattern contains more parts than we're
  expecting, we throw an error, unless there's exactly one extra part and it
  matches the current database name. That way, thisdb.myschema.mytable is
  accepted as meaning just myschema.mytable, but otherdb.myschema.mytable is an
  error, and so is some.random.garbage.myschema.mytable.

  Mark Dilger, per report from Justin Pryzby and discussion among various people.

  Discussion: https://www.postgresql.org/message-id/20211013165426.GD27491%40telsasoft.com

* psql: add \dconfig command to show server's configuration parameters.  (Tom Lane, 2022-04-07)

  Plain \dconfig is basically equivalent to SHOW except that you can give it a
  pattern with wildcards, either to match multiple GUCs or because you don't
  exactly remember the name you want. \dconfig+ adds type, context, and
  access-privilege information, mainly because every other kind of object
  privilege has a psql command to show it, so GUC privileges should too.

  (A form of this command was in some versions of the patch series leading up to
  commit a0ffa885e. We pulled it out then because of doubts that the design and
  code were up to snuff, but I think subsequent work has resolved that.)

  In passing, fix incorrect completion of GUC names in GRANT/REVOKE ON
  PARAMETER: a0ffa885e neglected to use the VERBATIM form of
  COMPLETE_WITH_QUERY, so it misbehaved for custom (qualified) GUC names.

  Mark Dilger and Tom Lane

  Discussion: https://postgr.es/m/3118455.1649267333@sss.pgh.pa.us

* Update copyright for 2022  (Bruce Momjian, 2022-01-07)

  Backpatch-through: 10

* Rethink pg_dump's handling of object ACLs.  (Tom Lane, 2021-12-06)

  Throw away most of the existing logic for this, as it was very inefficient
  thanks to expensive sub-selects executed to collect ACL data that we very
  possibly would have no interest in dumping. Reduce the ACL handling in the
  initial per-object-type queries to be just collection of the catalog ACL
  fields, as it was originally. Fetch pg_init_privs data separately in a single
  scan of that catalog, and do the merging calculations on the client side.

  Remove the separate code path used for pre-9.6 source servers; there is no
  good reason to treat them differently from newer servers that happen to have
  empty pg_init_privs.

  Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
  Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc

* Fix incautious handling of possibly-miscoded strings in client code.  (Tom Lane, 2021-06-07)

  An incorrectly-encoded multibyte character near the end of a string could
  cause various processing loops to run past the string's terminating NUL, with
  results ranging from no detectable issue to a program crash, depending on what
  happens to be in the following memory. This isn't an issue in the server,
  because we take care to verify the encoding of strings before doing any
  interesting processing on them. However, that lack of care leaked into
  client-side code which shouldn't assume that anyone has validated the encoding
  of its input.

  Although this is certainly a bug worth fixing, the PG security team elected
  not to regard it as a security issue, primarily because any untrusted text
  should be sanitized by PQescapeLiteral or the like before being incorporated
  into a SQL or psql command. (If an app fails to do so, the same technique can
  be used to cause SQL injection, with probably much more dire consequences than
  a mere client-program crash.) Those functions were already made proof against
  this class of problem, cf CVE-2006-2313.

  To fix, invent PQmblenBounded() which is like PQmblen() except it won't return
  more than the number of bytes remaining in the string. In HEAD we can make
  this a new libpq function, as PQmblen() is. It seems imprudent to change
  libpq's API in stable branches though, so in the back branches define
  PQmblenBounded as a macro in the files that need it.

  (Note that just changing PQmblen's behavior would not be a good idea; notably,
  it would completely break the escaping functions' defense against this exact
  problem. So we just want a version for those callers that don't have any
  better way of handling this issue.)

  Per private report from houjingyi. Back-patch to all supported branches.

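  A minimal sketch (not code from the commit) of the intended calling pattern:
  stepping through a string with PQmblenBounded() cannot advance past the
  terminating NUL even when the final bytes form a truncated multibyte
  character, unlike a naive loop built on PQmblen().

```c
#include <libpq-fe.h>

static int
count_chars(const char *s, int encoding)
{
    int         n = 0;
    const char *p;

    for (p = s; *p != '\0'; p += PQmblenBounded(p, encoding))
        n++;                    /* step is clamped to the bytes remaining */

    return n;
}
```
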
* Allow psql's \df and \do commands to specify argument types.  (Tom Lane, 2021-04-07)

  When dealing with overloaded function or operator names, having to look
  through a long list of matches is tedious. Let's extend these commands to
  allow specification of (input) argument types to let such results be trimmed
  down. Each additional argument is treated the same as the pattern argument of
  \dT and matched against the appropriate argument's type name.

  While at it, fix \dT (and these new options) to recognize the usual notation
  of "foo[]" for "the array type over foo", and to handle the special
  abbreviations allowed by the backend grammar, such as "int" for "integer".

  Greg Sabino Mullane, revised rather significantly by me

  Discussion: https://postgr.es/m/CAKAnmmLF9Hhu02N+s7uAyLc5J1xZReg72HQUoiKhNiJV3_jACQ@mail.gmail.com

* Factor pattern-construction logic out of processSQLNamePattern.  (Robert Haas, 2021-02-03)

  The logic for converting the shell-glob-like syntax supported by utilities
  like psql and pg_dump into regular expressions has been extracted into a new
  function, patternToSQLRegex. The existing function processSQLNamePattern now
  uses this function as a subroutine.

  patternToSQLRegex is a little more general than what is required by
  processSQLNamePattern. That function is only interested in patterns that can
  have up to 2 parts, a schema and a relation; but patternToSQLRegex can limit
  the maximum number of parts to between 1 and 3, so that patterns can look like
  either "database.schema.relation", "schema.relation", or "relation" depending
  on how it's invoked and what the user specifies.

  processSQLNamePattern only passes two buffers, so it works exactly the same as
  before, always interpreting the pattern as either a "schema.relation" pattern
  or a "relation" pattern. But future callers can use this function in other
  ways.

  Mark Dilger, reviewed by me. The larger patch series of which this is a part
  has also had review from Peter Geoghegan, Andres Freund, Álvaro Herrera,
  Michael Paquier, and Amul Sul, but I don't know whether any of them have
  reviewed this bit specifically.

  Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com
  Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com
  Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com

* Update copyright for 2021  (Bruce Momjian, 2021-01-02)

  Backpatch-through: 9.5

* Update copyrights for 2020  (Bruce Momjian, 2020-01-01)

  Backpatch-through: update all files in master, backpatch legal files through 9.4

* Make the order of the header file includes consistent in non-backend modules.  (Amit Kapila, 2019-10-25)

  Similar to commit 7e735035f2, this commit makes the order of header file
  inclusion consistent for non-backend modules.

  In passing, fix the case where we were using angle brackets (<>) for the local
  module includes instead of quotes ("").

  Author: Vignesh C
  Reviewed-by: Amit Kapila
  Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com

* Use appendStringInfoString and appendPQExpBufferStr where possible  (David Rowley, 2019-07-04)

  This changes various places where appendPQExpBuffer was used when it was
  possible to use appendPQExpBufferStr instead, and likewise for appendStringInfo
  and appendStringInfoString. This is really just a stylistic improvement, but
  there are also small performance gains to be had from doing this.

  Discussion: http://postgr.es/m/CAKJS1f9P=M-3ULmPvr8iCno8yvfDViHibJjpriHU8+SXUgeZ=w@mail.gmail.com

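  An illustrative before/after of the substitution this commit makes (buffer
  and table names invented): when the appended text contains no % escapes, the
  plain-string variant avoids the printf machinery.

```c
#include "postgres_fe.h"
#include "pqexpbuffer.h"

static void
build_query(PQExpBuffer buf, const char *quoted_table)
{
    /* constant text: prefer the plain-string append */
    appendPQExpBufferStr(buf, "SELECT count(*) FROM ");

    /* formatted text still needs the printf-style variant */
    appendPQExpBuffer(buf, "%s;", quoted_table);
}
```
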
* Update stale comments, and fix comment typos.  (Noah Misch, 2019-06-08)

* Ensure consistent name matching behavior in processSQLNamePattern().  (Tom Lane, 2019-04-05)

  Prior to v12, if you used a collation-sensitive regex feature in a pattern
  handled by processSQLNamePattern() (for instance, \d '\\w+' in psql), the
  behavior you got matched the database's default collation. Since commit
  586b98fdf you'd usually get C-collation behavior, because the catalog
  "name"-type columns are now marked as COLLATE "C". Add explicit COLLATE
  specifications to restore the prior behavior.

  (Note for whoever writes the v12 release notes: the need for this shows that
  while 586b98fdf preserved pre-v12 behavior of "name" columns for simple
  comparison operators, it changed the behavior of regex operators on those
  columns. Although this patch fixes it for pattern matches generated by our own
  tools, user-written queries will still be affected. So we'd better mention
  this issue as a compatibility item.)

  Daniel Vérité

  Discussion: https://postgr.es/m/701e51f0-0ec0-4e70-a365-1958d66dd8d2@manitou-mail.org

* Replace the data structure used for keyword lookup.  (Tom Lane, 2019-01-06)

  Previously, ScanKeywordLookup was passed an array of string pointers. This had
  some performance deficiencies: the strings themselves might be scattered all
  over the place depending on the compiler (and some quick checking shows that
  at least with gcc-on-Linux, they indeed weren't reliably close together). That
  led to very cache-unfriendly behavior as the binary search touched strings in
  many different pages. Also, depending on the platform, the string pointers
  might need to be adjusted at program start, so that they couldn't be simple
  constant data. And the ScanKeyword struct had been designed with an eye to
  32-bit machines originally; on 64-bit it requires 16 bytes per keyword, making
  it even more cache-unfriendly.

  Redesign so that the keyword strings themselves are allocated consecutively
  (as part of one big char-string constant), thereby eliminating the
  touch-lots-of-unrelated-pages syndrome. And get rid of the ScanKeyword array
  in favor of three separate arrays: uint16 offsets into the keyword array,
  uint16 token codes, and uint8 keyword categories. That reduces the overhead
  per keyword to 5 bytes instead of 16 (even less in programs that only need one
  of the token codes and categories); moreover, the binary search only touches
  the offsets array, further reducing its cache footprint. This also lets us put
  the token codes somewhere else than the keyword strings are, which avoids some
  unpleasant build dependencies.

  While we're at it, wrap the data used by ScanKeywordLookup into a struct that
  can be treated as an opaque type by most callers. That doesn't change things
  much right now, but it will make it less painful to switch to a hash-based
  lookup method, as is being discussed in the mailing list thread.

  Most of the change here is associated with adding a generator script that can
  build the new data structure from the same list-of-PG_KEYWORD header
  representation we used before. The PG_KEYWORD lists that plpgsql and ecpg used
  to embed in their scanner .c files have to be moved into headers, and the
  Makefiles have to be taught to invoke the generator script. This work is also
  necessary if we're to consider hash-based lookup, since the generator script
  is what would be responsible for constructing a hash table.

  Aside from saving a few kilobytes in each program that includes the keyword
  table, this seems to speed up raw parsing (flex+bison) by a few percent. So
  it's worth doing even as it stands, though we think we can gain even more with
  a follow-on patch to switch to hash-based lookup.

  John Naylor, with further hacking by me

  Discussion: https://postgr.es/m/CAJVSVGXdFVU2sgym89XPL=Lv1zOS5=EHHQ8XWNzFL=mTXkKMLw@mail.gmail.com

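  A toy sketch of the layout described above; the names and keyword list are
  invented and the real generated structures differ, but it shows the idea of
  binary-searching an offsets array that indexes into one contiguous string of
  NUL-terminated keywords.

```c
#include <stdint.h>
#include <string.h>

/* One big constant string of NUL-separated keywords, in sorted order. */
static const char kw_strings[] =
    "abort\0" "begin\0" "commit\0" "select\0";

/* Offsets of each keyword within kw_strings. */
static const uint16_t kw_offsets[] = {0, 6, 12, 19};

#define NUM_KEYWORDS 4

/* Returns the keyword's index, or -1 if the word is not a keyword. */
static int
keyword_lookup(const char *word)
{
    int low = 0;
    int high = NUM_KEYWORDS - 1;

    while (low <= high)
    {
        int mid = (low + high) / 2;
        int cmp = strcmp(word, kw_strings + kw_offsets[mid]);

        if (cmp == 0)
            return mid;
        else if (cmp < 0)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1;
}
```
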
* Update copyright for 2019  (Bruce Momjian, 2019-01-02)

  Backpatch-through: certain files through 9.4

* Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands.  (Tom Lane, 2018-08-17)

  Previously, this code blindly followed the common coding pattern of passing
  PQserverVersion(AH->connection) as the server-version parameter of
  fmtQualifiedId. That works as long as we have a connection; but in pg_restore
  with text output, we don't. Instead we got a zero from PQserverVersion, which
  fmtQualifiedId interpreted as "server is too old to have schemas", and so the
  name went unqualified. That still accidentally managed to work in many cases,
  which is probably why this ancient bug went undetected for so long. It only
  became obvious in the wake of the changes to force dump/restore to execute
  with restricted search_path.

  In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's
  server-version behavioral dependency, and just making it schema-qualify all
  the time. We no longer support pg_dump from servers old enough to need the
  ability to omit schema name, let alone restoring to them. (Also, the few
  callers outside pg_dump already didn't work with pre-schema servers.)

  In older branches, that's not an acceptable solution, so instead just tweak
  the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify its output
  regardless of server version.

  Per bug #15338 from Oleg somebody. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org

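  A sketch of the post-change behavior, assuming the current
  fe_utils/string_utils.h signatures (no server-version argument):
  fmtQualifiedId() always schema-qualifies its result.

```c
#include "postgres_fe.h"
#include "fe_utils/string_utils.h"

static void
emit_disable_trigger(PQExpBuffer buf, const char *schema,
                     const char *table, const char *trigger)
{
    /* both helpers quote their arguments as identifiers */
    appendPQExpBuffer(buf, "ALTER TABLE %s DISABLE TRIGGER %s;\n",
                      fmtQualifiedId(schema, table),
                      fmtId(trigger));
}
```
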
* Empty search_path in Autovacuum and non-psql/pgbench clients.  (Noah Misch, 2018-02-26)

  This makes the client programs behave as documented regardless of the
  connect-time search_path and regardless of user-created objects. Today, a
  malicious user with CREATE permission on a search_path schema can take control
  of certain of these clients' queries and invoke arbitrary SQL functions under
  the client identity, often a superuser. This is exploitable in the default
  configuration, where all users have CREATE privilege on schema "public".

  This changes behavior of user-defined code stored in the database, like
  pg_index.indexprs and pg_extension_config_dump(). If they reach code bearing
  unqualified names, "does not exist" or "no schema has been selected to create
  in" errors might appear. Users may fix such errors by schema-qualifying
  affected names. After upgrading, consider watching server logs for these
  errors.

  The --table arguments of src/bin/scripts clients have been lax; for example,
  "vacuumdb -Zt pg_am\;CHECKPOINT" performed a checkpoint. That now fails, but
  for now, "vacuumdb -Zt 'pg_am(amname);CHECKPOINT'" still performs a
  checkpoint.

  Back-patch to 9.3 (all supported versions).

  Reviewed by Tom Lane, though this fix strategy was not his first choice.
  Reported by Arseniy Sharoglazov.

  Security: CVE-2018-1058

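  A minimal sketch of the hardening idea (the helper name is invented and the
  real clients centralize this differently): right after connecting, force
  search_path to empty so that only schema-qualified names resolve.

```c
#include <stdbool.h>
#include <stdio.h>
#include <libpq-fe.h>

static bool
secure_search_path(PGconn *conn)
{
    PGresult   *res = PQexec(conn,
                             "SELECT pg_catalog.set_config('search_path', '', false)");

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "could not clear search_path: %s", PQerrorMessage(conn));
        PQclear(res);
        return false;
    }
    PQclear(res);
    return true;
}
```
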
* Update copyright for 2018  (Bruce Momjian, 2018-01-02)

  Backpatch-through: certain files through 9.3

* Phase 2 of pgindent updates.  (Tom Lane, 2017-06-21)

  Change pg_bsd_indent to follow upstream rules for placement of comments to the
  right of code, and remove pgindent hack that caused comments following #endif
  to not obey the general rule.

  Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using the
  published version of pg_bsd_indent, but a hacked-up version that tried to
  minimize the amount of movement of comments to the right of code. The
  situation of interest is where such a comment has to be moved to the right of
  its default placement at column 33 because there's code there. BSD indent has
  always moved right in units of tab stops in such cases --- but in the previous
  incarnation, indent was working in 8-space tab stops, while now it knows we
  use 4-space tabs. So the net result is that in about half the cases, such
  comments are placed one tab stop left of before. This is better all around: it
  leaves more room on the line for comment text, and it means that in such cases
  the comment uniformly starts at the next 4-space tab stop after the code,
  rather than sometimes one and sometimes two tabs after.

  Also, ensure that comments following #endif are indented the same as comments
  following other preprocessor commands such as #else. That inconsistency turns
  out to have been self-inflicted damage from a poorly-thought-through
  post-indent "fixup" in pgindent.

  This patch is much less interesting than the first round of indent changes,
  but also bulkier, so I thought it best to separate the effects.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Initial pgindent run with pg_bsd_indent version 2.0.  (Tom Lane, 2017-06-21)

  The new indent version includes numerous fixes thanks to Piotr Stefaniak. The
  main changes visible in this commit are:

  * Nicer formatting of function-pointer declarations.
  * No longer unexpectedly removes spaces in expressions using casts, sizeof,
    or offsetof.
  * No longer wants to add a space in "struct structname *varname", as well as
    some similar cases for const- or volatile-qualified pointers.
  * Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
  * Fixes bug where comments following declarations were sometimes placed with
    no space separating them from the code.
  * Fixes some odd decisions for comments following case labels.
  * Fixes some cases where comments following code were indented to less than
    the expected column 33.

  On the less good side, it now tends to put more whitespace around typedef
  names that are not listed in typedefs.list. This might encourage us to put
  more effort into typedef name collection; it's not really a bug in indent
  itself.

  There are more changes coming after this round, having to do with comment
  indentation and alignment of lines appearing within parentheses. I wanted to
  limit the size of the diffs to something that could be reviewed without one's
  eyes completely glazing over, so it seemed better to split up the changes as
  much as practical.

  Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
  Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us

* Allow psql variable substitution to occur in backtick command strings.  (Tom Lane, 2017-04-01)

  Previously, text between backquotes in a psql metacommand's arguments was
  always passed to the shell literally. That considerably hobbles the usefulness
  of the feature for scripting, so we'd foreseen for a long time that we'd
  someday want to allow substitution of psql variables into the shell command.
  IMO the addition of \if metacommands has brought us to that point, since \if
  can greatly benefit from some sort of client-side expression evaluation
  capability, and psql itself is not going to grow any such thing in time for
  v10. Hence, this patch.

  It allows :VARIABLE to be replaced by the exact contents of the named
  variable, while :'VARIABLE' is replaced by the variable's contents suitably
  quoted to become a single shell-command argument. (The quoting rules for that
  are different from those for SQL literals, so this is a bit of an abuse of the
  :'VARIABLE' notation, but I doubt anyone will be confused.)

  As with other situations in psql, no substitution occurs if the word following
  a colon is not a known variable name. That limits the risk of compatibility
  problems for existing psql scripts; but the risk isn't zero, so this needs to
  be called out in the v10 release notes.

  Discussion: https://postgr.es/m/9561.1490895211@sss.pgh.pa.us

* Spelling fixes in code comments  (Peter Eisentraut, 2017-03-14)

  From: Josh Soref <jsoref@gmail.com>

* Update copyright via script for 2017  (Bruce Momjian, 2017-01-03)

* Teach appendShellString() to not quote strings containing "-".  (Tom Lane, 2016-09-06)

  Brain fade in commit a00c58314: I was thinking that a string starting with "-"
  could be taken as a switch depending on command line syntax. That's true, but
  having appendShellString() quote it will not help, so we may as well not do
  so.

  Per complaint from Peter Eisentraut.

* Make initdb's suggested "pg_ctl start" command line more reliable.  (Tom Lane, 2016-08-20)

  The original coding here was not nearly careful enough about quoting special
  characters, and it didn't get corner cases right for constructing the pg_ctl
  path either. Use join_path_components() and appendShellString() to do it
  honestly, so that the string will more likely work if blindly
  copied-and-pasted.

  While at it, teach appendShellString() not to quote strings that clearly don't
  need it, so that the output from initdb doesn't become uglier than it was
  before in typical cases where quoting is not needed.

  Ryan Murphy, reviewed by Michael Paquier and myself

  Discussion: <CAHeEsBeAe1FeBypT3E8R1ZVZU0e8xv3A-7BHg6bEOi=jZny2Uw@mail.gmail.com>

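  A minimal sketch (paths invented) of building a copy-and-pasteable command
  line with appendShellString(), which adds shell quoting only when the
  argument needs it.

```c
#include "postgres_fe.h"
#include "fe_utils/string_utils.h"

static void
suggest_start_command(const char *pgctl_path, const char *datadir)
{
    PQExpBufferData cmd;

    initPQExpBuffer(&cmd);

    appendShellString(&cmd, pgctl_path);            /* e.g. /usr/local/pgsql/bin/pg_ctl */
    appendPQExpBufferStr(&cmd, " -D ");
    appendShellString(&cmd, datadir);               /* quoted only if it needs quoting */
    appendPQExpBufferStr(&cmd, " -l logfile start");

    printf("%s\n", cmd.data);
    termPQExpBuffer(&cmd);
}
```
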
* Fix assorted places in psql to print version numbers >= 10 in new style.  (Tom Lane, 2016-08-16)

  This is somewhat cosmetic, since as long as you know what you are looking at,
  "10.0" is a serviceable substitute for "10". But there is a potential for
  confusion between version numbers with minor numbers and those without --- we
  don't want people asking "why is psql saying 10.0 when my server is 10.2".
  Therefore, back-patch as far as practical, which turns out to be 9.3. I could
  have redone the patch to use fprintf(stderr) in place of psql_error(), but it
  seems more work than is warranted for branches that will be EOL or nearly so
  by the time v10 comes out.

  Although only psql seems to contain any code that needs this, I chose to put
  the support function into fe_utils, since it seems likely we'll need it in
  other client programs in future. (In 9.3-9.5, use dumputils.c, the predecessor
  of fe_utils/string_utils.c.)

  In HEAD, also fix the backend code that whines about loadable-library version
  mismatch. I don't see much need to back-patch that.

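  A sketch of the support function mentioned above; the signature of
  formatPGVersionNumber() is assumed from fe_utils/string_utils.h, so treat the
  details as illustrative.

```c
#include "postgres_fe.h"
#include "fe_utils/string_utils.h"

static void
print_server_version(PGconn *conn)
{
    char        vbuf[32];

    /* e.g. 100002 is printed as "10.2", 90624 as "9.6.24" */
    printf("server version: %s\n",
           formatPGVersionNumber(PQserverVersion(conn), true,
                                 vbuf, sizeof(vbuf)));
}
```
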
* Obstruct shell, SQL, and conninfo injection via database and role names.  (Noah Misch, 2016-08-08)

  Due to simplistic quoting and confusion of database names with conninfo
  strings, roles with the CREATEDB or CREATEROLE option could escalate to
  superuser privileges when a superuser next ran certain maintenance commands.
  The new coding rule for PQconnectdbParams() calls, documented at
  conninfo_array_parse(), is to pass expand_dbname=true and wrap literal
  database names in a trivial connection string. Escape zero-length values in
  appendConnStrVal().

  Back-patch to 9.1 (all supported versions).

  Nathan Bossart, Michael Paquier, and Noah Misch. Reviewed by Peter Eisentraut.
  Reported by Nathan Bossart.

  Security: CVE-2016-5424

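  A minimal sketch (function name invented) of the coding rule described above:
  wrap a literal database name in a trivial connection string, letting
  appendConnStrVal() handle conninfo-level quoting, and pass the result with
  expand_dbname=true.

```c
#include "postgres_fe.h"
#include "fe_utils/string_utils.h"

/* Build "dbname=<value>" with proper conninfo quoting; caller frees result. */
static char *
make_connstr(const char *dbname)
{
    PQExpBufferData connstr;

    initPQExpBuffer(&connstr);
    appendPQExpBufferStr(&connstr, "dbname=");
    appendConnStrVal(&connstr, dbname);     /* quotes/escapes per conninfo syntax */

    return connstr.data;
}
```
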
* Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.  (Noah Misch, 2016-08-08)

  Rename these newly-extern functions with terms more typical of their new
  neighbors. No functional changes; a subsequent commit will use them in more
  places.

  Back-patch to 9.1 (all supported versions). Back branches lack src/fe_utils,
  so instead rename the functions in place; the subsequent commit will copy them
  into the other programs using them.

  Security: CVE-2016-5424

* Fix comment.  (Tom Lane, 2016-05-15)

  Reference to getThreadLocalPQExpBuffer here seems inappropriate, since we
  aren't necessarily using that instantiation of getLocalPQExpBuffer.

* Move and rename fmtReloptionsArray().  (Dean Rasheed, 2016-05-06)

  Move fmtReloptionsArray() from pg_dump.c to string_utils.c so that it is
  available to other frontend code. In particular psql's \ev and \sv commands
  need it to handle view reloptions. Also rename the function to
  appendReloptionsArray(), which is a more accurate description of what it does.

  Author: Dean Rasheed
  Reviewed-by: Peter Eisentraut
  Discussion: http://www.postgresql.org/message-id/CAEZATCWZjCgKRyM-agE0p8ax15j9uyQoF=qew7D2xB6cF76T8A@mail.gmail.com

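  A hypothetical usage sketch; the parameter list of appendReloptionsArray() is
  assumed from memory of fe_utils/string_utils.h and may not match exactly, so
  verify against the header before relying on it.

```c
#include "postgres_fe.h"
#include "fe_utils/string_utils.h"

static void
append_view_options(PQExpBuffer buf, PGconn *conn, const char *reloptions)
{
    if (reloptions && *reloptions)
    {
        appendPQExpBufferStr(buf, " WITH (");
        /* parameter order assumed: buffer, array text, prefix, encoding, std_strings */
        if (!appendReloptionsArray(buf, reloptions, "",
                                   PQclientEncoding(conn),
                                   true))   /* assume standard_conforming_strings */
            fprintf(stderr, "could not parse reloptions: %s\n", reloptions);
        appendPQExpBufferStr(buf, ")");
    }
}
```
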
* Create src/fe_utils/, and move stuff into there from pg_dump's dumputils.  (Tom Lane, 2016-03-24)

  Per discussion, we want to create a static library and put the stuff into it
  that until now has been shared across src/bin/ directories by ad-hoc methods
  like symlinking a source file. This commit creates the library and populates
  it with a couple of files that contain the widely-useful portions of pg_dump's
  dumputils.c file.

  dumputils.c survives, because it has some stuff that didn't seem appropriate
  for fe_utils, but it's significantly smaller and is no longer referenced from
  any other directory. Follow-on patches will move more stuff into fe_utils.

  The Mkvcbuild.pm hacking here is just a best guess; we'll see how the
  buildfarm likes it.