| Commit message | Author | Age |
A 'void *' argument suggests that the caller might pass an arbitrary
struct, which is appropriate for functions like libc's read/write, or
pq_sendbytes(). 'uint8 *' is more appropriate for byte arrays that
have no structure, like the cancellation keys or SCRAM tokens. Some
places used 'char *', but 'uint8 *' is better because 'char *' is
commonly used for null-terminated strings. Change code around SCRAM,
MD5 authentication, and cancellation key handling to follow these
conventions.
Discussion: https://www.postgresql.org/message-id/61be9e31-7b7d-49d5-bc11-721800d89d64@eisentraut.org
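
As a rough illustration, the convention boils down to prototypes like the
following (read() and strlen() are the familiar libc declarations;
set_cancel_key() is a hypothetical example, not taken from the tree):

    /* arbitrary structs or buffers: 'void *', as in read()/write()/pq_sendbytes() */
    extern ssize_t read(int fd, void *buf, size_t count);

    /* unstructured byte arrays such as cancel keys or SCRAM tokens: 'uint8 *' */
    extern void set_cancel_key(const uint8 *key, int key_len);

    /* null-terminated strings: 'char *' */
    extern size_t strlen(const char *s);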
The documented max length of a cancel key is 256 bytes, so it fits in
uint8. It nevertheless seems weird to not just use 'int', like in
commit 0f1433f053 for the backend.
Discussion: https://www.postgresql.org/message-id/61be9e31-7b7d-49d5-bc11-721800d89d64%40eisentraut.org
This adds a new connection parameter which instructs libpq to
write out key material client-side into a file, in order to make
connection debugging with Wireshark and similar tools possible.
The file format used is the standardized NSS format.
Author: Abhishek Chanda <abhishek.becs@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/CAKiP-K85C8uQbzXKWf5wHQPkuygGUGcufke713iHmYWOe9q2dA@mail.gmail.com
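
A minimal usage sketch; it assumes the parameter is spelled sslkeylogfile
(the message above does not spell out the name) and that the resulting file
is pointed to from Wireshark's TLS protocol settings:

    #include <libpq-fe.h>

    /* Dump TLS key material client-side in NSS key log format (assumed
     * parameter name: sslkeylogfile) so Wireshark can decrypt the session. */
    static PGconn *
    connect_with_keylog(void)
    {
        return PQconnectdb("host=db.example.com dbname=postgres "
                           "sslmode=require sslkeylogfile=/tmp/libpq-keys.txt");
    }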
Currently, the cancel request key is a 32-bit token, which isn't very
much entropy. If you want to cancel another session's query, you can
brute-force it. In most environments, an unauthorized cancellation of
a query isn't very serious, but it nevertheless would be nice to have
more protection from it. Hence make the key longer, to make it harder
to guess.
The longer cancellation keys are generated when using the new protocol
version 3.2. For connections using version 3.0, short 4-byte keys are
still used.
The new longer key length is no longer hardcoded in the protocol;
the client is expected to deal with variable-length keys of up to 256
bytes. This flexibility allows e.g. a connection pooler to add more
information to the cancel key, which might be useful for finding the
connection.
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Discussion: https://www.postgresql.org/message-id/508d0505-8b7a-4864-a681-e7e5edfe32aa@iki.fi
All supported versions of the PostgreSQL server send the
NegotiateProtocolVersion message when an unsupported minor protocol
version is requested by a client. But many other applications that
implement the PostgreSQL protocol (connection poolers, or other
databases) do not, and the same is true for PostgreSQL server versions
older than 9.3. Connecting to such other applications thus fails if a
client requests a protocol version different from 3.0.
This patch adds a max_protocol_version connection option to libpq that
specifies the protocol version that libpq should request from the
server. Currently only 3.0 is supported, but that will change in a
future commit that bumps the protocol version. Even after that version
bump the default will likely stay 3.0 for the time being. Once more of
the ecosystem supports the NegotiateProtocolVersion message we might
want to change the default to the latest minor version.
This also adds the similar min_protocol_version connection option, to
allow the client to specify that connecting should fail if a lower
protocol version is attempted by the server. This can be used to
ensure that certain protocol features are used, which can be
particularly useful if those features impact security.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Discussion: https://www.postgresql.org/message-id/CAGECzQTfc_O%2BHXqAo5_-xG4r3EFVsTefUeQzSvhEyyLDba-O9w@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAGECzQRbAGqJnnJJxTdKewTsNOovUt4bsx3NFfofz3m2j-t7tA@mail.gmail.com
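
A hedged sketch of how a client might use the new option once a newer
protocol version exists; the value 3.2 assumes the later version bump
described above:

    #include <libpq-fe.h>

    /* Refuse to run over anything older than protocol 3.2 instead of
     * silently negotiating down to 3.0. */
    static PGconn *
    connect_with_min_proto(void)
    {
        return PQconnectdb("host=db.example.com dbname=postgres "
                           "min_protocol_version=3.2");
    }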
Previously libpq would always error out if the server sends a
NegotiateProtocolVersion message. This was fine because libpq only
supported a single protocol version and did not support any protocol
parameters. But in the upcoming commits, we will introduce a new
protocol version and the NegotiateProtocolVersion message starts to
actually be used.
This patch modifies the client side checks to allow a range of
supported protocol versions, instead of only allowing the exact
version that was requested. Currently this "range" only contains the
3.0 version, but in a future commit we'll change this.
Also clarify the error messages, making them suitable for the world
where libpq will support multiple protocol versions and protocol
extensions.
Note that until the later commits that introduce a new protocol version,
this change does not have any behavioural effect, because libpq will
only request version 3.0 and will never send protocol parameters, and
therefore will never receive a NegotiateProtocolVersion message from
the server.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Discussion: https://www.postgresql.org/message-id/CAGECzQTfc_O%2BHXqAo5_-xG4r3EFVsTefUeQzSvhEyyLDba-O9w@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAGECzQRbAGqJnnJJxTdKewTsNOovUt4bsx3NFfofz3m2j-t7tA@mail.gmail.com
Due to a conflict in macro names on Windows between <wincrypt.h>
and <openssl/ssl.h>, these headers need to be included using a
predictable pattern with an undef to handle that. The GSSAPI
header <gssapi.h> does include <wincrypt.h>, which causes problems
when compiling PostgreSQL with MSVC when OpenSSL and GSSAPI are
both enabled in the tree. Rather than fixing this piecemeal for each
file including gssapi headers, move the includes and undef
to a new file which should be used to centralize the logic.
This patch is a reworked version of a patch by Imran Zaheer
proposed earlier in the thread. Once this has proven effective
in master we should look at backporting this, as the problem
has existed at least since v16.
Author: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Imran Zaheer <imran.zhir@gmail.com>
Reported-by: Dave Page <dpage@pgadmin.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/20240708173204.3f3xjilglx5wuzx6@awork3.anarazel.de
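
A rough sketch of what such a centralizing header can look like (the actual
file name, include guard, and macro list in the tree may differ):

    /* hypothetical pg_gssapi_wrapper.h */
    #ifndef PG_GSSAPI_WRAPPER_H
    #define PG_GSSAPI_WRAPPER_H

    #if defined(HAVE_GSSAPI_H)
    #include <gssapi.h>
    #else
    #include <gssapi/gssapi.h>
    #endif

    #ifdef _WIN32
    /* <gssapi.h> pulls in <wincrypt.h>, whose macros collide with
     * <openssl/ssl.h>; undefine the usual offenders before OpenSSL. */
    #undef X509_NAME
    #undef X509_EXTENSIONS
    #undef OCSP_REQUEST
    #undef OCSP_RESPONSE
    #endif

    #endif   /* PG_GSSAPI_WRAPPER_H */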
This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device
Authorization Grants, RFC 8628. In order to use this there is a
new pg_hba auth method called oauth. When speaking to an OAuth-
enabled server, it looks a bit like this:
$ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
Device authorization is currently the only supported flow so the
OAuth issuer must support that in order for users to authenticate.
Third-party clients may however extend this and provide their own
flows. The built-in device authorization flow is currently not
supported on Windows.
In order for validation to happen server-side, a new framework for
plugging in OAuth validation modules is added. As validation is
implementation specific, with no default specified in the standard,
PostgreSQL does not ship with one built-in. Each pg_hba entry can
specify a specific validator or be left blank for the validator
installed as default.
This adds a requirement on libcurl for the client side support,
which is optional to build, but the server side has no additional
build requirements. In order to run the tests, Python is required,
as this adds an HTTPS server written in Python. Tests are gated
behind PG_TEST_EXTRA as they open ports.
This patch has been a multi-year project with many contributors
involved with reviews and in-depth discussions: Michael Paquier,
Heikki Linnakangas, Zhihong Yu, Mahendrakar Srinivasarao, Andrey
Chudnovsky and Stephen Frost to name a few. While Jacob Champion
is the main author, there has been some hacking by others as well.
Daniel Gustafsson contributed the validation module and various bits
and pieces; Thomas Munro wrote the client side support for kqueue.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
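
A hedged libpq-level equivalent of the psql example above (issuer and client
id values are placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* The built-in device-authorization flow prints a verification URL
         * and user code (as in the psql example) while connecting. */
        PGconn *conn = PQconnectdb("host=example.org dbname=postgres "
                                   "oauth_issuer=https://oauth.example.org "
                                   "oauth_client_id=my-client-id");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }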
This adds the ability for a SASL mechanism to signal PQconnectPoll()
that some arbitrary work, external to the Postgres connection, is
required for authentication to continue. There is no consumer for
this capability as part of this commit; it is infrastructure which
is required for future work on supporting the OAUTHBEARER mechanism.
To ensure that threads are not blocked waiting for the SASL mechanism
to make long-running calls, the mechanism communicates with the top-
level client via the "altsock": a file or socket descriptor, opaque to
this layer of libpq, which is signaled when work is ready to be done
again. The altsock temporarily replaces the regular connection
descriptor, so existing PQsocket() clients should continue to operate
correctly using their existing polling implementations.
For a mechanism to use this it should set an authentication callback,
conn->async_auth(), and a cleanup callback, conn->cleanup_async_auth(),
and return SASL_ASYNC during the exchange. It should then assign
conn->altsock during the first call to async_auth(). When the cleanup
callback is called, either because authentication has succeeded or
because the connection is being dropped, the altsock must be released
and disconnected from the PGconn object.
This was extracted from the larger OAUTHBEARER patchset which has
been developed, and reviewed by many, over several years and it is
thus likely that some reviewer credit of much earlier versions has
been accidentally omitted.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/CAOYmi+kJqzo6XsR9TEhvVfeVNQ-TyFM5LATypm9yoQVYk=4Wrw@mail.gmail.com
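
For context, this is the kind of caller-side polling loop that keeps working
unchanged, because the altsock is transparently returned by PQsocket() while
asynchronous authentication is in progress (a sketch, error handling trimmed):

    #include <sys/select.h>
    #include <libpq-fe.h>

    static PGconn *
    connect_nonblocking(const char *conninfo)
    {
        PGconn *conn = PQconnectStart(conninfo);
        PostgresPollingStatusType st = PGRES_POLLING_WRITING;

        while (conn && st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
        {
            fd_set      rfds, wfds;
            int         sock = PQsocket(conn);

            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            if (st == PGRES_POLLING_READING)
                FD_SET(sock, &rfds);
            else
                FD_SET(sock, &wfds);
            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                break;
            st = PQconnectPoll(conn);
        }
        return conn;
    }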
Prior to this patch, the require_auth implementation assumed that
the AuthenticationSASL protocol message was using SCRAM-SHA-256.
In preparation for future SASL mechanisms, like OAUTHBEARER, split
the implementation into two tiers: the first checks the acceptable
AUTH_REQ_* codes, and the second checks acceptable mechanisms if
AUTH_REQ_SASL et al. are permitted.
conn->allowed_sasl_mechs contains a list of pointers to acceptable
mechanisms, and pg_SASL_init() will bail if the selected mechanism
isn't contained in this array.
Since there's only one mechanism supported right now, one branch
of the second tier cannot be exercised yet and is protected by an
Assert(false) call. This assertion will need to be removed when
the next mechanism is added.
This patch is extracted from a larger body of work aimed at adding
support for OAUTHBEARER in libpq.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAOYmi+kJqzo6XsR9TEhvVfeVNQ-TyFM5LATypm9yoQVYk=4Wrw@mail.gmail.com
Add various widely useful "IWYU pragma" annotations, such as
- Common header files such as c.h, postgres.h should be "always_keep".
- System headers included in c.h, postgres.h etc. should be considered
"export".
- Some portability headers such as getopt_long.h should be
"always_keep", so they are not considered superfluous on some
platforms.
- Certain system headers included from portability headers should be
considered "export" because the purpose of the portability header is
to wrap them.
- Superfluous includes marked as "for backward compatibility" get a
formal IWYU annotation.
- Generated header included in utils/syscache.h is marked exported.
This is a very commonly used include and this avoids lots of
complaints.
Discussion: https://www.postgresql.org/message-id/flat/9395d484-eff4-47c2-b276-8e228526c8ae@eisentraut.org
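
For reference, IWYU pragmas are plain comments in the headers themselves; the
annotations described above look roughly like this (file names illustrative):

    /* at the top of c.h or postgres.h: never report this header as removable */
    /* IWYU pragma: always_keep */

    /* a system header wrapped by a portability header is re-exported */
    #include <getopt.h>             /* IWYU pragma: export */

    /* an include kept only for backward compatibility */
    #include "utils/elog.h"         /* IWYU pragma: keep */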
This enables SCRAM authentication for postgres_fdw when connecting to
a foreign server without having to store a plain-text password on user
mapping options.
This is done by saving the SCRAM ClientKey and ServerKey from the
client authentication and using those instead of the plain-text
password for the server-side SCRAM exchange. The new foreign-server
or user-mapping option "use_scram_passthrough" enables this.
Co-authored-by: Matheus Alcantara <mths.dev@pm.me>
Co-authored-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/27b29a35-9b96-46a9-bc1a-914140869dac@gmail.com
Backpatch-through: 13
This commit adds one field to PGconn for the database service name (if
any), with PQservice() as a routine to retrieve it. Like the other
routines of this area, NULL is returned as result if the connection is
NULL.
A follow-up patch will make use of this feature to be able to display
the service name in the psql prompt.
Author: Michael Banck
Reviewed-by: Greg Sabino Mullane
Discussion: https://postgr.es/m/6723c612.050a0220.1567f4.b94a@mx.google.com
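
A small usage sketch, assuming a "mydb" entry exists in pg_service.conf:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("service=mydb");

        /* PQservice() returns the service name in use; NULL if conn is NULL */
        if (PQstatus(conn) == CONNECTION_OK)
            printf("connected via service \"%s\"\n", PQservice(conn));
        PQfinish(conn);
        return 0;
    }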
PostgreSQL has for a long time mixed two BIO implementations, which can
lead to subtle bugs and inconsistencies. This cleans up our BIO by
just setting up the methods we need. This patch does not introduce any
functionality changes.
The following methods are no longer defined due to not being needed:
- gets: Not used by libssl
- puts: Not used by libssl
- create: Sets up state not used by libpq
- destroy: Not used since libpq uses BIO_NOCLOSE; if it were used it would
close the socket from underneath libpq
- callback_ctrl: Not implemented by sockets
The following methods are defined for our BIO:
- read: Used for reading arbitrary length data from the BIO. No change
in functionality from the previous implementation.
- write: Used for writing arbitrary length data to the BIO. No change
in functionality from the previous implementation.
- ctrl: Used for processing ctrl messages in the BIO (similar to ioctl).
The only ctrl message which matters is BIO_CTRL_FLUSH used for
writing out buffered data (or signal EOF and that no more data
will be written). BIO_CTRL_FLUSH is mandatory to implement and
is implemented as a no-op since there is no intermediate buffer
to flush.
BIO_CTRL_EOF is the out-of-band method for signalling EOF to
read_ex-based BIOs. Our BIO is not read_ex based, but someone
could accidentally call BIO_CTRL_EOF on us, so implement it mainly
for completeness' sake.
As the implementation is no longer related to BIO_s_socket or calling
SSL_set_fd, methods have been renamed to reference the PGconn and Port
types instead.
This also reverts back to using BIO_set_data, with our fallback, as a small
optimization, as BIO_set_app_data requires the ex_data mechanism in OpenSSL.
Author: David Benjamin <davidben@google.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAF8qwaCZ97AZWXtg_y359SpOHe+HdJ+p0poLCpJYSUxL-8Eo8A@mail.gmail.com
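
A condensed sketch of the pattern (a plain socket fd stored with
BIO_set_data; libpq's real callbacks route through its own I/O layer
instead, so this is illustrative only):

    #include <stdint.h>
    #include <sys/socket.h>
    #include <openssl/bio.h>

    static int
    my_bio_read(BIO *b, char *buf, int len)
    {
        int         fd = (int) (intptr_t) BIO_get_data(b);

        return (int) recv(fd, buf, len, 0);
    }

    static int
    my_bio_write(BIO *b, const char *buf, int len)
    {
        int         fd = (int) (intptr_t) BIO_get_data(b);

        return (int) send(fd, buf, len, 0);
    }

    static long
    my_bio_ctrl(BIO *b, int cmd, long num, void *ptr)
    {
        if (cmd == BIO_CTRL_FLUSH)
            return 1;               /* mandatory, but nothing buffered here */
        if (cmd == BIO_CTRL_EOF)
            return 0;               /* we are not a read_ex-based BIO */
        return 0;
    }

    /* Build a BIO_METHOD exposing only read/write/ctrl, as described above. */
    static BIO_METHOD *
    my_bio_method(void)
    {
        static BIO_METHOD *meth = NULL;

        if (meth == NULL)
        {
            meth = BIO_meth_new(BIO_get_new_index() | BIO_TYPE_SOURCE_SINK,
                                "custom socket BIO");
            BIO_meth_set_read(meth, my_bio_read);
            BIO_meth_set_write(meth, my_bio_write);
            BIO_meth_set_ctrl(meth, my_bio_ctrl);
        }
        return meth;
    }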
OpenSSL 1.0.2 has been EOL from the upstream OpenSSL project for
some time, and is no longer the default OpenSSL version with any
vendor which packages PostgreSQL. By retiring support for OpenSSL
1.0.2 we can remove a lot of no longer required complexity for
managing state within libcrypto which is now handled by OpenSSL.
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/ZG3JNursG69dz1lr@paquier.xyz
Discussion: https://postgr.es/m/CA+hUKGKh7QrYzu=8yWEUJvXtMVm_CNWH1L_TLWCbZMwbi1XP2Q@mail.gmail.com
Not all messages that libpq received from the server would be sent
through our message tracing logic. This commit tries to fix that by
introducing a new function pqParseDone which makes it harder to forget
about doing so.
The messages that we now newly send through our tracing logic are:
- CopyData (received by COPY TO STDOUT)
- Authentication requests
- NegotiateProtocolVersion
- Some ErrorResponse messages during connection startup
- ReadyForQuery when received after a FunctionCall message
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQSoPHtZ4xe0raJ6FYSEiPPS+YWXBhOGo+Y1YecLgknF3g@mail.gmail.com
Since these are single bytes instead of v2 or v3 messages, they need
custom tracing logic. These "messages" don't even have official names
in the protocol specification, so I (Jelte) called them SSLResponse and
GSSENCResponse here.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQSoPHtZ4xe0raJ6FYSEiPPS+YWXBhOGo+Y1YecLgknF3g@mail.gmail.com
If tracing was enabled during connection startup, these messages would
previously be listed in the trace output as something like this:
F 54 Unknown message: 70
mismatched message length: consumed 4, expected 54
With this commit their type and contents are now correctly listed:
F 36 StartupMessage 3 0 "user" "foo" "database" "alvherre"
F 54 SASLInitialResponse "SCRAM-SHA-256" 32 'n,,n=,r=nq5zEPR/VREHEpOAZzH8Rujm'
F 108 SASLResponse 'c=biws,r=nq5zEPR/VREHEpOAZzH8RujmVtWZDQ8glcrvy9OMNw7ZqFUn,p=BBwAKe0WjSvigB6RsmmArAC+hwucLeuwJrR5C/HQD5M='
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAGECzQSoPHtZ4xe0raJ6FYSEiPPS+YWXBhOGo+Y1YecLgknF3g@mail.gmail.com
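
To capture the startup exchange shown above, the trace stream has to be
installed before the connection completes; a minimal sketch:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Caller drives PQconnectPoll() afterwards; all startup traffic,
     * including the SASL messages, ends up in the trace file. */
    static PGconn *
    start_traced_connection(const char *conninfo, FILE *tracefile)
    {
        PGconn *conn = PQconnectStart(conninfo);

        if (conn != NULL && tracefile != NULL)
        {
            PQtrace(conn, tracefile);
            PQsetTraceFlags(conn, PQTRACE_SUPPRESS_TIMESTAMPS);
        }
        return conn;
    }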
Commit f5e4dedfa exposed libpq's internal function PQsocketPoll
without a lot of thought about whether that was an API we really
wanted to chisel in stone. The main problem with it is the use of
time_t to specify the timeout. While we do want an absolute time
so that a loop around PQsocketPoll doesn't have problems with
timeout slippage, time_t has only 1-second resolution. That's
already problematic for libpq's own internal usage --- for example,
pqConnectDBComplete has long had a kluge to treat "connect_timeout=1"
as 2 seconds so that it doesn't accidentally round to nearly zero.
And it's even less likely to be satisfactory for external callers.
Hence, let's change this while we still can.
The best idea seems to be to use an int64 count of microseconds since
the epoch --- basically the same thing as the backend's TimestampTz,
but let's use the standard Unix epoch (1970-01-01) since that's more
likely for clients to be easy to calculate. Millisecond resolution
would be plenty for foreseeable uses, but maybe the day will come that
we're glad we used microseconds.
Also, since time(2) isn't especially helpful for computing timeouts
defined this way, introduce a new function PQgetCurrentTimeUSec
to get the current time in this form.
Remove the hack in pqConnectDBComplete, so that "connect_timeout=1"
now means what you'd expect.
We can also remove the "#include <time.h>" that f5e4dedfa added to
libpq-fe.h, since there's no longer a need for time_t in that header.
It seems better for v17 not to enlarge libpq-fe.h's include footprint
from what it's historically been, anyway.
I also failed to resist the temptation to do some wordsmithing
on PQsocketPoll's documentation.
Patch by me, per complaint from Dominique Devienne.
Discussion: https://postgr.es/m/913559.1718055575@sss.pgh.pa.us
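
A sketch of the intended usage, assuming the v17 signatures where the
deadline is an absolute time in microseconds since the Unix epoch:

    #include <libpq-fe.h>

    /* Wait up to five seconds for the connection's socket to become readable,
     * using an absolute deadline so retries don't suffer timeout slippage. */
    static int
    wait_readable(PGconn *conn)
    {
        pg_usec_time_t deadline = PQgetCurrentTimeUSec() + 5 * 1000000;

        /* return value: >0 readable, 0 timeout, <0 error */
        return PQsocketPoll(PQsocket(conn), 1 /* forRead */, 0 /* forWrite */,
                            deadline);
    }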
There were three problems with the sslnegotiation options:
1. The sslmode=prefer and sslnegotiation=requiredirect combination was
somewhat dangerous, as you might unintentionally fall back to
plaintext authentication when connecting to a pre-v17 server.
2. There was an asymmetry between 'postgres' and 'direct'
options. 'postgres' meant "try only traditional negotiation", while
'direct' meant "try direct first, and fall back to traditional
negotiation if it fails". That was apparent only if you knew that the
'requiredirect' mode also exists.
3. The "require" word in 'requiredirect' suggests that it's somehow
more strict or more secure, similar to sslmode. However, I don't
consider direct SSL connections to be a security feature.
To address these problems:
- Only allow sslnegotiation='direct' if sslmode='require' or
stronger. And for the record, Jacob and Robert felt that we should do
that (or have sslnegotiation='direct' imply sslmode='require') anyway,
regardless of the first issue.
- Remove the 'direct' mode that falls back to traditional negotiation,
and rename what was called 'requiredirect' to 'direct' instead. In
other words, there is no "try both methods" option anymore, 'postgres'
now means the traditional negotiation and 'direct' means a direct SSL
connection.
Reviewed-by: Jelte Fennema-Nio, Robert Haas, Jacob Champion
Discussion: https://www.postgresql.org/message-id/d3b1608a-a1b6-4eda-9ec5-ddb3e4375808%40iki.fi
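
Under the reworked options, a client that wants the direct handshake asks for
it explicitly together with sslmode=require or stronger; a short sketch:

    #include <libpq-fe.h>

    /* 'direct' skips the SSLRequest round-trip; 'postgres' (the default)
     * keeps the traditional negotiation. */
    static PGconn *
    connect_direct_ssl(void)
    {
        return PQconnectdb("host=db.example.com dbname=postgres "
                           "sslmode=require sslnegotiation=direct");
    }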
By skipping SSLRequest, you can eliminate one round-trip when
establishing a TLS connection. It is also more friendly to generic TLS
proxies that don't understand the PostgreSQL protocol.
This is disabled by default in libpq, because the direct TLS handshake
will fail with old server versions. It can be enabled with the
sslnegotation=direct option. It will still fall back to the negotiated
TLS handshake if the server rejects the direct attempt, either because
it is an older version or the server doesn't support TLS at all, but
the fallback can be disabled with the sslnegotiation=requiredirect
option.
Author: Greg Stark, Heikki Linnakangas
Reviewed-by: Matthias van de Meent, Jacob Champion
This fixes the few corner cases noted in commit 705843d294, as shown
by the changes in the test.
Author: Heikki Linnakangas, Matthias van de Meent
Reviewed-by: Jacob Champion
This patch generalizes libpq's existing single-row mode to allow
individual partial-result PGresults to contain up to N rows, rather
than always one row. This reduces malloc overhead compared to plain
single-row mode, and it is very useful for psql's FETCH_COUNT feature,
since otherwise we'd have to add code (and cycles) to either merge
single-row PGresults into a bigger one or teach psql's
results-printing logic to accept arrays of PGresults.
To avoid API breakage, PQsetSingleRowMode() remains the same, and we
add a new function PQsetChunkedRowsMode() to invoke the more general
case. Also, PGresults obtained the old way continue to carry the
PGRES_SINGLE_TUPLE status code, while if PQsetChunkedRowsMode() is
used then their status code is PGRES_TUPLES_CHUNK. The underlying
logic is the same either way, though.
Daniel Vérité, reviewed by Laurenz Albe and myself (and whacked
around a bit by me, so any remaining bugs are my fault)
Discussion: https://postgr.es/m/CAKZiRmxsVTkO928CM+-ADvsMyePmU3L9DQCa9NwqjvLPcEe5QA@mail.gmail.com
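
A hedged sketch of the retrieval loop, assuming the new function takes the
maximum number of rows to return per PGresult:

    #include <libpq-fe.h>

    static void
    fetch_in_chunks(PGconn *conn)
    {
        PGresult   *res;

        PQsendQuery(conn, "SELECT * FROM big_table");
        PQsetChunkedRowsMode(conn, 1000);       /* up to 1000 rows per chunk */

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_TUPLES_CHUNK)
            {
                /* process PQntuples(res) rows of this chunk */
            }
            else if (PQresultStatus(res) == PGRES_TUPLES_OK)
            {
                /* final, zero-row result: the whole set has been consumed */
            }
            PQclear(res);
        }
    }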
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PQcancelConn struct and assorted functions, which
provide a better mechanism for sending cancel requests; in particular, all
the encryption used in the original connection is also used in the
cancel connection. The main entry points are:
- PQcancelCreate creates the PQcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate non-blocking cancel requests,
using encryption if the original connection did so, which must be
pumped using
- PQcancelPoll.
- PQcancelReset puts a PQcancelConn back in state so that it can be
reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, mimics PQstatus;
- PQcancelSocket, mimics PQsocket;
- PQcancelErrorMessage, mimics PQerrorMessage;
- PQcancelFinish, mimics PQfinish.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
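
A minimal blocking usage sketch of the new API:

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Cancel whatever 'conn' is currently running, reusing the original
     * connection's encryption settings for the cancel request. */
    static void
    cancel_running_query(PGconn *conn)
    {
        PGcancelConn *cancel = PQcancelCreate(conn);

        if (!PQcancelBlocking(cancel))
            fprintf(stderr, "cancel failed: %s\n",
                    PQcancelErrorMessage(cancel));
        PQcancelFinish(cancel);
    }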
No other files need to access this struct, so there is no need to have
its definition in a header file.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/202403061822.spfzqbf7dsgg@alvherre.pgsql
This is in preparation of a follow up commit that starts using these
functions from fe-cancel.c.
Author: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
In a follow up commit we'll need to free this connhost field in a
function defined in fe-cancel.c, so here we extract the logic to a
dedicated extern function.
Author: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
In follow up commits we'll add more functions related to query
cancellations. This groups those all together instead of mixing them
with the other functions in fe-connect.c.
The formerly static parse_int_param() function had to be exported to
other libpq users, so it's been renamed pqParseIntParam() and moved to a
more reasonable place within fe-connect.c (rather than randomly between
various keepalive-related routines).
Author: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
Reported-by: Michael Paquier
Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
Backpatch-through: 12
The logic to keep the libpq command queue in sync with queries that have
been processed had a bug when errors were returned for reasons other
than problems in queries -- for example, when a connection is lost. We
incorrectly consumed an element from the command queue every time, but
this is wrong and can lead to the queue becoming empty ahead of time,
leading to later malfunction: PQgetResult would return nothing,
potentially causing the calling application to enter a busy loop.
Fix by making the SYNC queue element a barrier that can only be consumed
when a SYNC message is received.
Backpatch to 14.
Reported by: Иван Трофимов (Ivan Trofimov) <i.trofimow@yandex.ru>
Discussion: https://postgr.es/m/17948-fcace7557e449957@postgresql.org
When using GSSAPI encryption in non-blocking mode, libpq sometimes
failed with "GSSAPI caller failed to retransmit all data needing
to be retried". The cause is that pqPutMsgEnd rounds its transmit
request down to an even multiple of 8K, and sometimes that can lead
to not requesting a write of data that was requested to be written
(but reported as not written) earlier. That can upset pg_GSS_write's
logic for dealing with not-yet-written data, since it's possible
the data in question had already been incorporated into an encrypted
packet that we weren't able to send during the previous call.
We could fix this with a one-or-two-line hack to disable pqPutMsgEnd's
round-down behavior, but that seems like making the caller work around
a behavior that pg_GSS_write shouldn't expose in this way. Instead,
adjust pg_GSS_write to never report a partial write: it either
reports a complete write, or reflects the failure of the lower-level
pqsecure_raw_write call. The requirement still exists for the caller
to present at least as much data as on the previous call, but with
the caller-visible write start point not moving there is no temptation
for it to present less. We lose some ability to reclaim buffer space
early, but I doubt that that will make much difference in practice.
This also gets rid of a rather dubious assumption that "any
interesting failure condition (from pqsecure_raw_write) will recur
on the next try". We've not seen failure reports traceable to that,
but I've never trusted it particularly and am glad to remove it.
Make the same adjustments to the equivalent backend routine
be_gssapi_write(). It is probable that there's no bug on the backend
side, since we don't have a notion of nonblock mode there; but we
should keep the logic the same to ease future maintenance.
Per bug #18210 from Lars Kanis. Back-patch to all supported branches.
Discussion: https://postgr.es/m/18210-4c6d0b14627f2eb8@postgresql.org
All supported computers have either POSIX or Windows threads, and we no
longer have any automated testing of --disable-thread-safety. We define
a vestigial ENABLE_THREAD_SAFETY macro to 1 in ecpg_config.h in case it
is useful, but we no longer test it anywhere in PostgreSQL code, and
associated dead code paths are removed.
The Meson and perl-based Windows build scripts never had an equivalent
build option.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA%2BhUKGLtmexrpMtxBRLCVePqV_dtWG-ZsEbyPrYc%2BNBB2TkNsw%40mail.gmail.com
The following routines are added to libpq:
PGresult *PQclosePrepared(PGconn *conn, const char *stmt);
PGresult *PQclosePortal(PGconn *conn, const char *portal);
int PQsendClosePrepared(PGconn *conn, const char *stmt);
int PQsendClosePortal(PGconn *conn, const char *portal);
The "send" routines are non-blocking versions of the two others.
Close messages are part of the protocol but they did not have a libpq
implementation. And, having these routines is for instance useful with
connection poolers, as these can more easily detect Close messages
than DEALLOCATE queries.
The implementation takes advantage of what the Describe routines rely on
for portals and statements. Some regression tests are added in
libpq_pipeline, for the four new routines, by closing portals and
statements created already by the tests.
Author: Jelte Fennema
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAGECzQTb4xFAopAVokudB+L62Kt44mNAL4Z9zZ7UTrs1TRFvWA@mail.gmail.com
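
Using the prototypes above, closing a named prepared statement without
issuing a DEALLOCATE query looks like this (sketch, error handling trimmed):

    #include <libpq-fe.h>

    static void
    close_prepared(PGconn *conn)
    {
        /* sends a protocol-level Close for prepared statement "stmt1" */
        PGresult   *res = PQclosePrepared(conn, "stmt1");

        PQclear(res);
    }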
Here are some notes about this change:
- As X509_get_signature_nid() should always exist (in both OpenSSL and
LibreSSL), HAVE_X509_GET_SIGNATURE_NID is now gone.
- OPENSSL_API_COMPAT is bumped to 0x10002000L.
- One comment related to 1.0.1e introduced by 74242c2 is removed.
Upstream OpenSSL still provides long-term support for 1.0.2 in a closed
fashion, so removing it is out of scope for a few years, at least.
Reviewed-by: Jacob Champion, Daniel Gustafsson
Discussion: https://postgr.es/m/ZG3JNursG69dz1lr@paquier.xyz
That's how other boolean options are handled, so do likewise.
The previous coding with "enable" and "disable" was seemingly
modeled on gssencmode, but that's a three-way flag.
While at it, add PGGSSDELEGATION to the set of environment
variables cleared by pg_regress and Utils.pm.
Abhijit Menon-Sen, per gripe from Alvaro Herrera
Discussion: https://postgr.es/m/20230522091609.nlyuu4nolhycqs2p@alvherre.pgsql
Complete the task begun in 9c0a0e2ed: we don't want to use the
abbreviation "deleg" for GSS delegation in any user-visible places.
(For consistency, this also changes most internal uses too.)
Abhijit Menon-Sen and Tom Lane
Discussion: https://postgr.es/m/949048.1684639317@sss.pgh.pa.us
Run pgindent, pgperltidy, and reformat-dat-files.
This set of diffs is a bit larger than typical. We've updated to
pg_bsd_indent 2.1.2, which properly indents variable declarations that
have multi-line initialization expressions (the continuation lines are
now indented one tab stop). We've also updated to perltidy version
20230309 and changed some of its settings, which reduces its desire to
add whitespace to lines to make assignments etc. line up. Going
forward, that should make for fewer random-seeming changes to existing
code.
Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
The changes done in this commit impact comments with no direct
user-visible changes, with fixes for incorrect function, variable or
structure names.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/e8c38840-596a-83d6-bd8d-cebc51111572@gmail.com
This reverts commit 3d03b24c3 (Revert Add support for Kerberos
credential delegation) which was committed on the grounds of concern
about portability, but on further review and discussion, it's clear that
we are better off explicitly requiring MIT Kerberos as that appears to
be the only GSSAPI library currently that's under proper maintenance
and ongoing development. The API used for storing credentials was added
to MIT Kerberos over a decade ago, while for the other libraries, which
appear to be mainly based on Heimdal (which exists explicitly to be a
re-implementation of MIT Kerberos), the API never made it to a released
version (even though it was added to the Heimdal git repo over 5 years
ago).
This post-feature-freeze change was approved by the RMT.
Discussion: https://postgr.es/m/ZDDO6jaESKaBgej0%40tamriel.snowman.net
This reverts commit 3d4fa227bce4294ce1cc214b4a9d3b7caa3f0454.
Per discussion and buildfarm, this depends on APIs that seem to not
be available on at least one platform (NetBSD). It should certainly be
possible to rework this to be optional on that platform if necessary, but
it is a bit late for that at this point.
Discussion: https://postgr.es/m/3286097.1680922218@sss.pgh.pa.us
Support GSSAPI/Kerberos credentials being delegated to the server by a
client. With this, a user authenticating to PostgreSQL using Kerberos
(GSSAPI) credentials can choose to delegate their credentials to the
PostgreSQL server (which can choose to accept them, or not), allowing
the server to then use those delegated credentials to connect to
another service, such as with postgres_fdw or dblink or theoretically
any other service which is able to be authenticated using Kerberos.
Both postgres_fdw and dblink are changed to allow non-superuser
password-less connections but only when GSSAPI credentials have been
delegated to the server by the client and GSSAPI is used to
authenticate to the remote system.
Authors: Stephen Frost, Peifeng Qiu
Reviewed-By: David Christensen
Discussion: https://postgr.es/m/CO1PR05MB8023CC2CB575E0FAAD7DF4F8A8E29@CO1PR05MB8023.namprd05.prod.outlook.com
This adds support for load balancing connections with libpq using a
connection parameter: load_balance_hosts=<string>. When setting the
param to random, hosts and addresses will be connected to in random
order. This then results in load balancing across these addresses and
hosts when multiple clients or frequent connection setups are used.
The randomization employed performs two levels of shuffling:
1. The given hosts are randomly shuffled, before resolving them
one-by-one.
2. Once a host's addresses get resolved, the returned addresses
are shuffled, before trying to connect to them one-by-one.
Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.
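
A short usage sketch spreading new connections over three hosts:

    #include <libpq-fe.h>

    static PGconn *
    connect_balanced(void)
    {
        /* hosts (and then each host's resolved addresses) are tried in
         * random order when load_balance_hosts=random */
        return PQconnectdb("host=node1.example.com,node2.example.com,node3.example.com "
                           "dbname=postgres load_balance_hosts=random");
    }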
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by libpq such that future improvements can, for
example, alter the order of entries.
As a nice side effect of this refactor the mechanism for iteration
over addresses in PQconnectPoll is now identical to its iteration
over hosts.
Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
Replace the hardcoded value with a GUC such that the iteration
count can be raised in order to increase protection against
brute-force attacks. The hardcoded value for SCRAM iteration
count was defined to be 4096, which is taken from RFC 7677, so
set the default for the GUC to 4096 to match. In RFC 7677 the
recommendation is at least 15000 iterations but 4096 is listed
as a SHOULD requirement given that it's estimated to yield a
0.5s processing time on a mobile handset of the time of RFC
writing (late 2015).
Raising the iteration count of SCRAM will make stored passwords
more resilient to brute-force attacks at a higher computational
cost during connection establishment. Lowering the count will
reduce computational overhead during connections at the tradeoff
of reducing strength against brute-force attacks.
There are however platforms where even a modest iteration count
yields too high a computational overhead, with weaker password
encryption schemes chosen as a result. In these situations,
SCRAM with a very low iteration count still gives benefits over
weaker schemes like md5, so we allow the iteration count to be
set to one at the low end.
The new GUC is intentionally generically named such that it can
be made to support future SCRAM standards should they emerge.
At that point the value can be made into key:value pairs with
an undefined key as a default which will be backwards compatible
with this.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Jonathan S. Katz <jkatz@postgresql.org>
Discussion: https://postgr.es/m/F72E7BC7-189F-4B17-BF47-9735EB72C364@yesql.se
The sslcertmode option controls whether the server is allowed and/or
required to request a certificate from the client. There are three
modes:
- "allow" is the default and follows the current behavior, where a
configured client certificate is sent if the server requests one
(via one of its default locations or sslcert). With the current
implementation, this will happen whenever TLS is negotiated.
- "disable" causes the client to refuse to send a client certificate
even if sslcert is configured or if a client certificate is available in
one of its default locations.
- "require" causes the client to fail if a client certificate is never
sent and the server opens a connection anyway. This doesn't add any
additional security, since there is no guarantee that the server is
validating the certificate correctly, but it may be helpful to troubleshoot
more complicated TLS setups.
sslcertmode=require requires SSL_CTX_set_cert_cb(), available since
OpenSSL 1.0.2. Note that LibreSSL does not include it.
Using a connection parameter different from require_auth has come up as
the simplest design because certificate authentication does not rely
directly on any of the AUTH_REQ_* codes, and one may want to require a
certificate to be sent in combination with a given authentication method,
like SCRAM-SHA-256.
TAP tests are added in src/test/ssl/, some of them relying on sslinfo to
check if a certificate has been set. These are compatible across all
the versions of OpenSSL supported on HEAD (currently down to 1.0.1).
Author: Jacob Champion
Reviewed-by: Aleksander Alekseev, Peter Eisentraut, David G. Johnston,
Michael Paquier
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
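
A hedged sketch of the strictest setting, failing unless a client certificate
was actually requested and sent:

    #include <libpq-fe.h>

    static PGconn *
    connect_require_client_cert(void)
    {
        return PQconnectdb("host=db.example.com dbname=postgres "
                           "sslmode=verify-full "
                           "sslcert=/path/to/client.crt sslkey=/path/to/client.key "
                           "sslcertmode=require");
    }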
The new connection parameter require_auth allows a libpq client to
define a list of comma-separated acceptable authentication types for use
with the server. There is no negotiation: if the server does not
present one of the allowed authentication requests, the connection
attempt done by the client fails.
The following keywords can be defined in the list:
- password, for AUTH_REQ_PASSWORD.
- md5, for AUTH_REQ_MD5.
- gss, for AUTH_REQ_GSS[_CONT].
- sspi, for AUTH_REQ_SSPI and AUTH_REQ_GSS_CONT.
- scram-sha-256, for AUTH_REQ_SASL[_CONT|_FIN].
- creds, for AUTH_REQ_SCM_CREDS (perhaps this should be removed entirely
now).
- none, to control unauthenticated connections.
All the methods that can be defined in the list can be negated, like
"!password", in which case the server must NOT use the listed
authentication type. The special method "none" allows/disallows the use
of unauthenticated connections (but it does not govern transport-level
authentication via TLS or GSSAPI).
Internally, the patch logic is tied to check_expected_areq(), that was
used for channel_binding, ensuring that an incoming request is
compatible with conn->require_auth. It also introduces a new flag,
conn->client_finished_auth, which is set by various authentication
routines when the client side of the handshake is finished. This
signals to check_expected_areq() that an AUTH_REQ_OK from the server is
expected, and allows the client to complain if the server bypasses
authentication entirely, with for example the reception of a too-early
AUTH_REQ_OK message.
Regression tests are added in authentication TAP tests for all the
keywords supported (except "creds", because it is around only for
compatibility reasons). A new TAP script has been added for SSPI, as
there was no script dedicated to it yet. It relies on SSPI being the
default authentication method on Windows, as set by pg_regress.
Author: Jacob Champion
Reviewed-by: Peter Eisentraut, David G. Johnston, Michael Paquier
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
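
A short sketch using the new option to accept only SCRAM, so a server that
asks for a cleartext or md5 password, or that skips authentication entirely,
causes the connection attempt to fail:

    #include <libpq-fe.h>

    static PGconn *
    connect_scram_only(void)
    {
        return PQconnectdb("host=db.example.com dbname=postgres "
                           "require_auth=scram-sha-256");
    }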
OpenSSL 1.1.1 and newer versions have added support for RSA-PSS
certificates, which requires the use of a specific routine in OpenSSL to
determine which hash function to use when computing the certificate hash
for channel binding in SCRAM-SHA-256. X509_get_signature_nid(), which is
the original routine the channel binding code has relied on, is not able to
determine which hash algorithm to use for such certificates. However,
X509_get_signature_info(), new to OpenSSL 1.1.1, is able to do it. This
commit switches the channel binding logic to rely on
X509_get_signature_info() over X509_get_signature_nid(), which would be
the choice when building with 1.1.1 or newer.
The error could have been triggered on the client or the server, hence
libpq and the backend need to have their related code paths patched.
Note that attempting to load an RSA-PSS certificate with OpenSSL 1.1.0
or older leads to a failure due to an unsupported algorithm.
The discovery of relying on X509_get_signature_info() comes from Jacob,
the tests have been written by Heikki (with a few tweaks from me), while I
have bundled the whole thing together while adding the bits needed for MSVC
and meson.
This issue exists since channel binding exists, so backpatch all the way
down. Some tests are added in 15~, triggered if compiling with OpenSSL
1.1.1 or newer, where the certificate and key files can easily be
generated for RSA-PSS.
Reported-by: Gunnar "Nick" Bluth
Author: Jacob Champion, Heikki Linnakangas
Discussion: https://postgr.es/m/17760-b6c61e752ec07060@postgresql.org
Backpatch-through: 11
Backpatch-through: 11
Before, receiving a NegotiateProtocolVersion message would result in a
confusing error message like
expected authentication request from server, but received v
This adds proper handling of this protocol message and produces an
on-topic error message from it.
Reviewed-by: Jacob Champion <jchampion@timescale.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f9c7862f-b864-8ef7-a861-c4638c83e209%40enterprisedb.com