After fixing ngx_http_v3_encode_varlen_int() in 400eb1b628,
NGX_HTTP_V3_VARLEN_INT_LEN retained the old value of 4, which is
insufficient for values over 1073741823 (1G - 1).
The NGX_HTTP_V3_VARLEN_INT_LEN macro is used in ngx_http_v3_uni.c to
format stream and frame types; the old buffer size is enough for
formatting this data. The macro is also used in
ngx_http_v3_filter_module.c to format output chunks and trailers.
Considering that output_buffers and proxy_buffer_size are below 1G in
all realistic scenarios, the old buffer size is enough there as well.
Previously, the expiration caused QUIC connection finalization even if
there were application-terminated streams still finishing sending data.
Such finalization terminated these streams.
An easy way to trigger this is to request a large file over HTTP/3 with
a small MTU. In this case, keepalive timeout expiration may abruptly
terminate the request stream.
Passwords were not preserved in optimized SSL contexts; the bug had
appeared in d791b4aab (1.23.1), as in the following configuration:

    server {
        proxy_ssl_password_file password;

        proxy_ssl_certificate $ssl_server_name.crt;
        proxy_ssl_certificate_key $ssl_server_name.key;

        location /original/ {
            proxy_pass https://u1/;
        }

        location /optimized/ {
            proxy_pass https://u2/;
        }
    }

The fix is to always preserve passwords, by copying them to the
configuration pool, if dynamic certificates are used. This is done as
part of merging the "ssl_passwords" configuration.
To minimize the number of copies, the preserved version is then used for
inheritance. A notable exception is inheritance of preserved empty
passwords to a context with statically configured certificates:

    server {
        proxy_ssl_certificate $ssl_server_name.crt;
        proxy_ssl_certificate_key $ssl_server_name.key;

        location / {
            proxy_pass ...;

            proxy_ssl_certificate example.com.crt;
            proxy_ssl_certificate_key example.com.key;
        }
    }

In this case, an unmodified version (NULL) of empty passwords is set,
to allow reading them from the password prompt on nginx startup.
As an additional optimization, a preserved instance of inherited
configured passwords is set at the previous level, to inherit it
to other contexts:

    server {
        proxy_ssl_password_file password;

        location /1/ {
            proxy_pass https://u1/;

            proxy_ssl_certificate $ssl_server_name.crt;
            proxy_ssl_certificate_key $ssl_server_name.key;
        }

        location /2/ {
            proxy_pass https://u2/;

            proxy_ssl_certificate $ssl_server_name.crt;
            proxy_ssl_certificate_key $ssl_server_name.key;
        }
    }
It was possible to write outside of the buffer used to keep UTF-8
decoded values when parsing conversion table configuration.
Since this happened before UTF-8 decoding, the fix is to check in
advance whether character codes would require more than a 3-byte
sequence. Note that this is already enforced by a later check of
values decoded by ngx_utf8_decode() against 0xffff, which corresponds
to the maximum value encodable as a valid 3-byte sequence, so the fix
does not affect valid values.
Found with AddressSanitizer.
Fixes GitHub issue #529.
As uncovered by a recent addition in slice.t, a partially initialized
context, coupled with an HTTP 206 response from a stub backend, might
be accessed in the next slice subrequest.
Found by bad memory allocator simulation.
It appears to be a relic from prototype locking removed in b0b7b5a35.
This makes it easier to understand why sessions may not be saved
in shared memory due to size.
All such transient buffers are converted to a single storage in BSS,
in preparation for raising the limit.
Previously, a request might be left in an inconsistent state in case of
error, which manifested in "http request count is zero" alerts when used
by the SSI filter.
The fix is to reshuffle the initialization order to postpone committing
state changes until after any potentially failing parts.
Found by bad memory allocator simulation.
In OpenSSL, session resumption always happens in the default SSL context,
prior to invoking the SNI callback. Further, unlike in TLSv1.2 and older
protocols, SSL_get_servername() returns values received in the resumption
handshake, which may be different from the value in the initial handshake.
Notably, this makes the restriction added in b720f650b insufficient for
sessions resumed with a different SNI server name.
Considering the example from b720f650b, previously, a client was able to
request example.org by presenting a certificate for example.org, then to
resume and request example.com.
The fix is to reject handshakes resumed with a different server name, if
verification of client certificates is enabled in a corresponding server
configuration.
The directive sets a timeout during which a keepalive connection will
not be closed by nginx for connection reuse or graceful shutdown.
The change allows clients that send multiple requests over the same
connection, without delay or with a small delay between them, to avoid
receiving a TCP RST in response to one of them, barring network issues
and non-graceful shutdown. As a side effect, it also addresses the TCP
reset problem described in RFC 9112, Section 9.6, when the last sent
HTTP response could be damaged by a follow-up TCP RST. This is important
for non-idempotent requests, which cannot be retried by the client.
It is not recommended to set keepalive_min_timeout to large values, as
this can introduce an additional delay during graceful shutdown and may
restrict nginx from effective connection reuse.
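For illustration, a minimal configuration using the directive might look
as follows (the timeout values here are illustrative, not defaults from
this change):

    http {
        keepalive_timeout     75s;
        keepalive_min_timeout 5s;   # do not close keepalive connections
                                    # for reuse/graceful shutdown within 5s
    }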
Caching is enabled with proxy_ssl_certificate_cache and friends.
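For illustration, assuming proxy_ssl_certificate_cache follows the same
"max=N [valid=time] [inactive=time]" syntax as the ssl_certificate_cache
directive below, a configuration might look like this (names and values
are illustrative):

    location / {
        proxy_pass https://backend;

        proxy_ssl_certificate       $ssl_server_name.crt;
        proxy_ssl_certificate_key   $ssl_server_name.key;
        proxy_ssl_certificate_cache max=1000 inactive=20s valid=1m;
    }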
Co-authored-by: Aleksei Bavshin <a.bavshin@nginx.com>
A new directive "ssl_certificate_cache max=N [valid=time] [inactive=time]"
enables caching of SSL certificate chain and secret key objects specified
by "ssl_certificate" and "ssl_certificate_key" directives with variables.
Co-authored-by: Aleksei Bavshin <a.bavshin@nginx.com>
It now uses 5/4 times more memory for the pending buffer.
Further, a single allocation is now used, which takes an additional
56 bytes for deflate_allocs in 64-bit mode, aligned to 16, to store
sub-allocation pointers, and the total allocation size is now padded
up to 128 bytes, which theoretically takes 200 additional bytes in
total. This still fits into the "4 * (64 + sizeof(void*))" additional
space for ZALLOC used in zlib-ng 2.1.x versions. The comment was
updated to reflect this.
Renaming a temporary file to an empty path ("") returns NGX_ENOPATH,
with a subsequent ngx_create_full_path() call to create the full path.
This function skips initial bytes as part of path separator lookup,
which causes out-of-bounds access on short strings.
The fix is to avoid renaming a temporary file to an obviously invalid
path, as well as to explicitly forbid such syntax for literal values.
Although Coverity reports a potential type underflow, it is not
actually possible because the terminating '\0' is always included.
Notably, the run-time check is sufficient for Win32 as well. Other
short invalid values result in either NGX_ENOENT or NGX_EEXIST and
"MoveFile() .. failed" critical log messages, which involve separate
error handling.
Prodded by Coverity (CID 1605485).
This simplifies merging protocol values after ea15896 and ebd18ec.
Further, as outlined in ebd18ec18, for libraries preceding TLSv1.2+
support, only the meaningful versions TLSv1 and TLSv1.1 are set by
default.
While here, fixed indentation.
When cropping an stsc atom, it's assumed that the chunk index is never 0.
Based on this assumption, start_chunk and end_chunk are calculated by
subtracting 1 from it. If the chunk index is zero, start_chunk or
end_chunk may underflow, which will later trigger the
"start/end time is out mp4 stco chunks" error. The change adds an
explicit check for a zero chunk index to avoid the underflow and report
a proper error.
A zero chunk index is explicitly banned in ISO/IEC 14496-12, 8.7.4
Sample To Chunk Box. It's also implicitly banned in the QuickTime File
Format specification, whose description of the chunk offset table
references "Chunk 1" as the first table element.
Currently, an error is triggered if any of the chunk runs in stsc are
unordered. This however does not include the final chunk run, which
ends with trak->chunks + 1. The previous chunk index can be larger,
leading to a 32-bit overflow. This made it possible to skip the
validity check "if (start_sample > n)", which could later lead to a
large trak->start_chunk/trak->end_chunk; these would be caught later
in ngx_http_mp4_update_stco_atom() or ngx_http_mp4_update_co64_atom().
While there are no implications of the validity check being avoided,
the change still adds a check to ensure the final chunk run is ordered,
to produce a meaningful error and avoid a potential integer overflow.
A specially crafted mp4 file with an empty run of chunks in the stsc
atom and a large value for samples per chunk for that run, combined
with a specially crafted request, made it possible to store that large
value in prev_samples and later in trak->end_chunk_samples while in
ngx_http_mp4_crop_stsc_data(). Later, in ngx_http_mp4_update_stsz_atom(),
this could result in a buffer overread while calculating
trak->end_chunk_samples_size.
Now the value of samples per chunk specified for an empty run is ignored.
MSVC generates a compilation error in case #if/#endif is used in a macro
parameter.
Previously, all upstream DNS entries would be immediately re-resolved
on config reload. With a large number of upstreams, this creates
a spike of DNS resolution requests. These spikes can overwhelm the
DNS server or cause drops on the network.
This patch retains the TTL of previous resolutions across reloads by
copying the expiry time of each upstream's name across configuration
cycles. As a result, no additional resolutions are needed.
The "resolver" and "resolver_timeout" directives can now be specified
directly in the "upstream" block.
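For example (the address, timeout, and zone size are illustrative; the
"resolve" parameter and "zone" come from the related re-resolve
support):

    upstream backend {
        zone backend 1m;
        server backend.example.com resolve;

        resolver 127.0.0.53;
        resolver_timeout 5s;
    }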
After configuration is reloaded, it may take some time for the
re-resolvable upstream servers to resolve and become available
as peers. During this time, client requests might get dropped.
Such servers are now pre-resolved using the "cache" of already
resolved peers from the old shared memory zone.
Specifying the upstream server by a hostname together with the
"resolve" parameter makes the hostname be periodically resolved,
with upstream servers added/removed as necessary.
This requires a "resolver" in the "http" configuration block.
The "resolver_timeout" parameter also affects when failed DNS
requests will be attempted again. Responses with NXDOMAIN will be
attempted again in 10 seconds.
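For example (names, the resolver address, and the zone size are
illustrative; a shared memory zone is shown since it is typically
required for re-resolvable servers):

    http {
        resolver 127.0.0.53 valid=30s;

        upstream backend {
            zone backend 1m;
            server backend.example.com:8080 resolve;
        }
    }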
Each upstream has a configuration generation number that is incremented
each time servers are added to or removed from the primary/backup list.
This number is remembered by the peer.init method, and if peer.get
detects a change in configuration, it returns NGX_BUSY.
Each server has a reference counter. It is incremented by peer.get and
decremented by peer.free. When a server is removed, it is removed from
the list of servers and is marked as "zombie". The memory allocated by
a zombie peer is freed only when its reference count becomes zero.
Co-authored-by: Roman Arutyunyan <arut@nginx.com>
Co-authored-by: Sergey Kandaurov <pluknet@nginx.com>
Co-authored-by: Vladimir Homutov <vl@nginx.com>
TLSv1 and TLSv1.1 are formally deprecated and forbidden to negotiate
due to insufficient security, as outlined in RFC 8996.
TLSv1 and TLSv1.1 are disabled in BoringSSL e95b0cad9 and LibreSSL 3.8.1
in such a way that they cannot be enabled in nginx configuration. In
OpenSSL 3.0, they are only permitted at security level 0 (disabled by
default).
Support is dropped in Chrome 84 and Firefox 78, and deprecated in Safari.
This change disables TLSv1 and TLSv1.1 by default for OpenSSL 1.0.1 and
newer, where TLSv1.2 support is available. For older library versions,
which do not have alternatives, these protocol versions remain enabled.
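Where legacy clients must still be supported and the library permits it,
the old protocols can be re-enabled explicitly, e.g. (as noted above,
OpenSSL 3.0 additionally requires security level 0; the cipher string is
illustrative):

    server {
        listen 443 ssl;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers   "DEFAULT:@SECLEVEL=0";
    }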
Starting from TLSv1.1 (as seen since draft-ietf-tls-rfc2246-bis-00),
the "certificate_authorities" field grammar of the CertificateRequest
message was redone to allow no distinguished names. In TLSv1.3, with
the restructured CertificateRequest message, this can be similarly
done by optionally including the "certificate_authorities" extension.
This makes it possible to avoid sending DNs at all.
In practice, aside from published TLS specifications, all supported
SSL/TLS libraries allow requesting client certificates with an empty
DN list for any protocol version. For instance, when operating in
TLSv1, this results in sending the "certificate_authorities" list as
a zero-length vector, which corresponds to the TLSv1.1 specification.
Such behaviour goes back to SSLeay.
The change relaxes the requirement to specify at least one trusted CA
certificate in the ssl_client_certificate directive, which resulted in
sending DNs of these certificates (closes #142). Instead, all trusted
CA certificates can now be specified using the ssl_trusted_certificate
directive if needed. A notable difference remains: certificates
specified in ssl_trusted_certificate are always loaded (see 3648ba7db).
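For example, client certificate verification can now be configured
without ssl_client_certificate, so that no DNs are sent (the file name
is illustrative):

    server {
        listen 443 ssl;

        ssl_verify_client       on;
        ssl_trusted_certificate ca.crt;
    }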
Co-authored-by: Praveen Chaudhary <praveenc@nvidia.com>
The directive allows passing upstream response trailers to the client.
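Assuming the directive introduced here is proxy_pass_trailers (the
upstream name is illustrative), usage might look like:

    location / {
        proxy_pass          http://backend;
        proxy_http_version  1.1;    # trailers are sent in chunked responses
        proxy_pass_trailers on;
    }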
Unordered chunks could result in trak->end_chunk being smaller than
trak->start_chunk in ngx_http_mp4_crop_stsc_data(). Later, in
ngx_http_mp4_update_stco_atom(), this caused a buffer overread while
trying to calculate trak->end_offset.
While cropping an stsc atom in ngx_http_mp4_crop_stsc_data(), a 32-bit
integer overflow could happen, which could result in incorrect seeking
and a very large value stored in "samples". This resulted in a large
invalid value of trak->end_chunk_samples. This value is further used
to calculate trak->end_chunk_samples_size in
ngx_http_mp4_update_stsz_atom(); while doing this, the large invalid
value of trak->end_chunk_samples could result in reading memory before
the stsz atom start. This could potentially result in a segfault.
In some rare cases, graceful shutdown may happen while initializing an HTTP/2
connection. Previously, such a connection ignored the shutdown and remained
active. Now it is gracefully closed prior to processing any streams to
eliminate the shutdown delay.
Previously, st->value was passed to header handlers with a NULL data
pointer.
While inserting a new entry into the dynamic table, first the entry is
added, and then older entries are evicted until the table size is within
capacity. After the first step, the number of entries may temporarily
exceed the maximum calculated from capacity by one entry, which
previously caused a table overflow.
The easiest way to trigger the issue is to keep adding entries with
empty names and values until the first eviction.
The issue was introduced by 987bee4363d1.
Previously, a decoder stream was created on demand for sending Section
Acknowledgement, Stream Cancellation and Insert Count Increment. If the
conditions for sending any of these instructions never arise, a decoder
stream is not created at all. These conditions include the client not
using the dynamic table and no streams abandoned by the server
(RFC 9204, Section 2.2.2.2). However, RFC 9204, Section 4.2 defines
only one condition for not creating a decoder stream:

    An endpoint MAY avoid creating a decoder stream if its decoder sets
    the maximum capacity of the dynamic table to zero.

The change enables pre-creation of the decoder stream at HTTP/3 session
initialization if the maximum dynamic table capacity is not zero. Note
that this value is currently hardcoded to 4096 bytes and is not
configurable, so the stream is now always created.
Also, the change fixes a potential stack overflow when creating a
decoder stream in ngx_http_v3_send_cancel_stream() while draining a
request stream in ngx_drain_connections(). Creating a decoder stream
involves calling ngx_get_connection(), which calls
ngx_drain_connections(), which will drain the same request stream
again. If the client's MAX_STREAMS for uni streams is high enough,
these recursive calls will continue until we run out of stack.
Otherwise, decoder stream creation will fail at some point and the
request stream connection will be drained. This may result in a
use-after-free, since this connection could still be referenced up
the stack.
Previously, chain links could sometimes be dropped instead of being
reused, which could result in increased memory consumption during
long requests.
A similar chain link issue in ngx_http_gzip_filter_module was fixed in
da46bfc484ef (1.11.10).
Based on a patch by Sangmin Lee.
Previously, a request body larger than declared in Content-Length
resulted in a 413 status code, because Content-Length was mistakenly
used as the maximum allowed request body size, similar to
client_max_body_size. Following the HTTP/3 specification, such requests
are now rejected with a 400 error as malformed.
Previously, the response text wasn't initialized and the rewrite module
was sending a response body set to NULL.
Found with UndefinedBehaviorSanitizer (pointer-overflow).
Signed-off-by: Piotr Sikora <piotr@aviatrix.com>
Previously, undefined behaviour could result when left-shifting a signed
integer, with implicit integer promotion placing the most significant
bit on the sign bit. In practice, though, this results in the same
value as with an explicit cast, at least on known compilers such as GCC
and Clang. The reason is that in_addr_t, which is equivalent to uint32_t
and the same as "unsigned int" in the ILP32 and LP64 data type models,
has the same type width as the intermediate after integer promotion,
so there are no side effects such as sign extension. This explains why
adding an explicit cast does not change object files in practice.
Found with UndefinedBehaviorSanitizer (shift).
Based on a patch by Piotr Sikora.
While copying ngx_http_variable_value_t structures to the geo binary
base in ngx_http_geo_copy_values(), and similarly in the stream module,
uninitialized parts of these structures are copied as well. These
include the "escape" field and possible holes. Calculating crc32 over
this data triggers uninitialized memory access.
Found with MemorySanitizer.
Signed-off-by: Piotr Sikora <piotr@aviatrix.com>
Now "fastopen", "backlog", "accept_filter", "deferred", and "so_keepalive"
parameters are not allowed with "quic" in the "listen" directive.
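For example, a "listen" directive such as the following is now rejected
at configuration load, since the parameter has no meaning for QUIC
(the port is illustrative):

    listen 443 quic backlog=1024;    # now rejected
    listen 443 quic reuseport;       # still valid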
Reported by Izorkin.
When filter finalization is triggered while working with an upstream
server, and error_page redirects request processing to some simple
handler, ngx_http_finalize_request() triggers request termination when
the response is sent. In particular, via the upstream cleanup handler,
nginx will close the upstream connection and the corresponding socket.
Still, this can happen with ngx_event_pipe() on the stack. While the
code will set p->downstream_error due to NGX_ERROR returned from the
output filter chain by filter finalization, the error will otherwise be
ignored till control returns to ngx_http_upstream_process_request(),
and the event pipe might try reading from the (already closed) socket,
resulting in "readv() failed (9: Bad file descriptor) while reading
upstream" errors (or even segfaults with SSL).
Such errors were seen with the following configuration:

    location /t2 {
        proxy_pass http://127.0.0.1:8080/big;

        image_filter_buffer 10m;
        image_filter resize 150 100;
        error_page 415 = /empty;
    }

    location /empty {
        return 204;
    }

    location /big {
        # big enough static file
    }

The fix is to clear p->upstream in ngx_http_upstream_finalize_request(),
and to ensure that p->upstream is checked in
ngx_event_pipe_read_upstream() and when handling events at
ngx_event_pipe() exit.
When a request was terminated due to an error via ngx_http_terminate_request()
while an AIO operation was running in a subrequest, various issues were
observed. This happened because ngx_http_request_finalizer() was only set
in the subrequest where ngx_http_terminate_request() was called, but not
in the subrequest where the AIO operation was running. After completion
of the AIO operation normal processing of the subrequest was resumed, leading
to issues.
In particular, in case of the upstream module, termination of the request
called upstream cleanup, which closed the upstream connection. Attempts to
further work with the upstream connection after AIO operation completion
resulted in segfaults in ngx_ssl_recv(), "readv() failed (9: Bad file
descriptor) while reading upstream" errors, or socket leaks.
In ticket #2555, issues were observed with the following configuration
with cache background update (with thread writing instrumented to
introduce a delay, when a client closes the connection during an update):
    location = /background-and-aio-write {
        proxy_pass ...
        proxy_cache one;
        proxy_cache_valid 200 1s;
        proxy_cache_background_update on;
        proxy_cache_use_stale updating;
        aio threads;
        aio_write on;
        limit_rate 1000;
    }
Similarly, the same issue can be seen with SSI, and can be caused by
errors in subrequests, such as in the following configuration
(where "/proxy" uses AIO, and "/sleep" returns 444 after some delay,
causing request termination):
    location = /ssi-active-boom {
        ssi on;
        ssi_types *;
        return 200 '
            <!--#include virtual="/proxy" -->
            <!--#include virtual="/sleep" -->
            ';
        limit_rate 1000;
    }
Or the same with both AIO operation and the error in non-active subrequests
(which needs slightly different handling, see below):
    location = /ssi-non-active-boom {
        ssi on;
        ssi_types *;
        return 200 '
            <!--#include virtual="/static" -->
            <!--#include virtual="/proxy" -->
            <!--#include virtual="/sleep" -->
            ';
        limit_rate 1000;
    }
Similarly, issues can be observed with just static files. However,
with static files potential impact is limited due to timeout safeguards
in ngx_http_writer(), and the fact that c->error is set during request
termination.
In a simple configuration with an AIO operation in the active subrequest,
such as in the following configuration, the connection is closed right
after completion of the AIO operation anyway, since ngx_http_writer()
tries to write to the connection and fails due to c->error set:
    location = /ssi-active-static-boom {
        ssi on;
        ssi_types *;
        return 200 '
            <!--#include virtual="/static-aio" -->
            <!--#include virtual="/sleep" -->
            ';
        limit_rate 1000;
    }
In the following configuration, with an AIO operation in a non-active
subrequest, the connection is closed only after send_timeout expires:
    location = /ssi-non-active-static-boom {
        ssi on;
        ssi_types *;
        return 200 '
            <!--#include virtual="/static" -->
            <!--#include virtual="/static-aio" -->
            <!--#include virtual="/sleep" -->
            ';
        limit_rate 1000;
    }
Fix is to introduce r->main->terminated flag, which is to be checked
by AIO event handlers when the r->main->blocked counter is decremented.
When the flag is set, handlers are expected to wake up the connection
instead of the subrequest (which might be already cleaned up).
Additionally, now ngx_http_request_finalizer() is always set in the
active subrequest, so waking up the connection properly finalizes the
request even if termination happened in a non-active subrequest.
Each AIO (thread IO) operation being run is now accompanied by a
1-minute timer. This timer prevents unexpected shutdown of the worker
process while an AIO operation is running, and logs an alert if the
operation runs for too long.
This fixes "open socket left" alerts during worker process shutdown
due to pending AIO (or thread IO) operations while the corresponding
requests have no timers. In particular, such errors were observed
while reading cache headers (ticket #2162), and with
worker_shutdown_timeout.