The change implements processing of upstream early hints responses in
ngx_http_proxy_module and ngx_http_grpc_module. A new directive
"early_hints" enables sending early hints to the client. By default,
sending early hints is disabled.
Example:

    map $http_sec_fetch_mode $early_hints {
        navigate $http2$http3;
    }

    early_hints $early_hints;
    proxy_pass http://example.com;
|
In some rare cases, graceful shutdown may happen while initializing an HTTP/2
connection. Previously, such a connection ignored the shutdown and remained
active. Now it is gracefully closed prior to processing any streams to
eliminate the shutdown delay.
|
As part of normal HTTP/2 processing, incomplete frames are saved in the
control state using a fixed size memcpy of NGX_HTTP_V2_STATE_BUFFER_SIZE.
For this matter, two state buffers are reserved in the HTTP/2 recv buffer.
As part of HTTP/2 auto-detection on plain TCP connections, initial data
is first read into a buffer specified by the client_header_buffer_size
directive that doesn't have state reservation. Previously, this made it
possible to over-read the buffer as part of saving the state.
The fix is to read the available buffer size rather than a fixed size.
Although memcpy of a fixed size can produce better optimized code,
handling of incomplete frames isn't a common execution path, so it was
sacrificed for the sake of simplicity of the fix.
|
To ensure that attempts to flood servers with many streams are detected
early, a limit of no more than 2 * max_concurrent_streams new streams per one
event loop iteration was introduced. This limit is applied even if
max_concurrent_streams is not yet reached - for example, if corresponding
streams are handled synchronously or reset.
Further, refused streams are now limited to the maximum of
max_concurrent_streams and 100, similarly to the priority_limit initial
value, providing some tolerance
to clients trying to open several streams at the connection start, yet
low tolerance to flooding attempts.
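
For illustration, a hedged sketch of the limits this change implies for a
hypothetical setting (the value 128 is arbitrary):

    http2_max_concurrent_streams 128;
    # at most 2 * 128 = 256 new streams per event loop iteration;
    # at most max(128, 100) = 128 refused streams before the connection
    # is treated as flooding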
|
Although HTTP/2 server push has a better implementation status than
HTTP/3 server push, it remains of limited use, with adoption numbers
seen as negligible. Per IETF 102 materials, server push was used in
only 0.04% of sessions. It was considered to be "difficult to use
effectively" in RFC 9113. Its use is further limited by a poor match
to the fetch/cache/connection models in browsers; see the related
discussions linked from [1].
Server push was disabled in Chrome 106 [2].
The http2_push, http2_push_preload, and http2_max_concurrent_pushes
directives are made obsolete. In particular, this essentially reverts
7201:641306096f5b and 7207:3d2b0b02bd3d.
[1] https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/
[2] https://chromestatus.com/feature/6302414934114304
|
The directive enables HTTP/2 in the current server. The previous way to
enable HTTP/2, via "listen ... http2", is now deprecated. The new
approach allows sharing HTTP/2 and HTTP/0.9-1.1 on the same port.
For SSL connections, HTTP/2 is now selected by the ALPN callback based
on whether the protocol is enabled in the virtual server chosen by SNI.
This, however, only works since OpenSSL 1.0.2h, where the ALPN callback
is invoked after the SNI callback. For older versions of OpenSSL, HTTP/2
is enabled based on the default virtual server configuration.
For plain TCP connections, HTTP/2 is now auto-detected by the HTTP/2
preface, if HTTP/2 is enabled in the default virtual server. If the
preface is not matched, HTTP/0.9-1.1 is assumed.
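
A minimal configuration sketch of the new style (server name illustrative):

    server {
        listen 443 ssl;      # the deprecated form was "listen 443 ssl http2;"
        http2 on;            # enables HTTP/2 in this server
        server_name example.com;
    }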
|
Similarly to 7192:d5a535774861, this avoids spurious zero statuses
in access.log, and is in line with other header-related errors.
|
Similarly to ticket #274 (7354:1812f1d79d84), early request finalization
without calling ngx_http_run_posted_requests() resulted in a connection
hang (a socket leak) if the 400 (Bad Request) error was generated in
ngx_http_v2_state_process_header() due to invalid request headers and
"return 444" was used in error_page 400.
|
Multi headers are now using linked lists instead of arrays. Notably,
the following fields were changed: r->headers_in.cookies (renamed
to r->headers_in.cookie), r->headers_in.x_forwarded_for,
r->headers_out.cache_control, r->headers_out.link, u->headers_in.cache_control,
u->headers_in.cookies (renamed to u->headers_in.set_cookie).
The r->headers_in.cookies and u->headers_in.cookies fields were renamed
to r->headers_in.cookie and u->headers_in.set_cookie to match header names.
The ngx_http_parse_multi_header_lines() and ngx_http_parse_set_cookie_lines()
functions were changed accordingly.
With this change, multi headers are now essentially equivalent to normal
headers, and the following changes will make them fully equivalent.
|
With large http2_max_concurrent_streams or http2_max_concurrent_pushes, more
than 255 ngx_http_v2_node_t structures might be allocated, eventually leading
to h2c->closed_nodes overflow when closing corresponding streams. This will
in turn result in additional allocations in ngx_http_v2_get_node_by_id().
While mostly harmless, it can result in excessive memory usage by an HTTP/2
connection, notably in configurations with many keepalive_requests allowed.
Fix is to use ngx_uint_t for h2c->closed_nodes instead of unsigned:8.
|
Response headers can be buffered in the SSL buffer. But the stream's fake
connection buffered flag did not reflect this, so any attempts to flush
the buffer without sending additional data were stopped by the write filter.
It does not seem to be possible to reflect this in fc->buffered though, as
we never know if the main connection's c->buffered corresponds to the particular
stream or not. As such, fc->buffered might prevent request finalization
due to sending data on some other stream.
Fix is to implement handling of flush buffers when the c->need_flush_buf
flag is set, similarly to the existing last buffer handling. The same
flag is now used for UDP sockets in the stream module instead of explicit
checking of c->type.
|
ngx_http_v2_huff_decode.c and ngx_http_v2_huff_encode.c are renamed
to ngx_http_huff_decode.c and ngx_http_huff_encode.c.
|
With sendfile() in threads ("aio threads; sendfile on;"), the client
connection can block on writing, waiting for sendfile() to complete. In
HTTP/2 this might result in a request hang, since an attempt to continue
processing in the thread event handler will call the request's write
event handler, which is usually stopped by ngx_http_v2_send_chain(): it
does nothing if there are no additional data and stream->queued is set.
Further, HTTP/2 resets the stream's c->write->ready to 0 if writing
blocks, so just fixing ngx_http_v2_send_chain() is not enough.
Can be reproduced with test suite on Linux with:
TEST_NGINX_GLOBALS_HTTP="aio threads; sendfile on;" prove h2*.t
The following tests currently fail: h2_keepalive.t, h2_priority.t,
h2_proxy_max_temp_file_size.t, h2.t, h2_trailers.t.
Similarly, sendfile() with AIO preloading on FreeBSD can block as well,
with similar results. This is, however, harder to reproduce, especially
on modern FreeBSD systems, since sendfile() usually does not return EBUSY.
Fix is to modify ngx_http_v2_send_chain() so it actually tries to send
data to the main connection when called, and to make sure that
c->write->ready is set by the relevant event handlers.
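
For reference, the same combination expressed as configuration directives
(taken from the reproduction line above; http context assumed):

    aio threads;    # offload blocking file operations to a thread pool
    sendfile on;    # sendfile() completion may block the client connection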
|
NPN was replaced with ALPN, published as RFC 7301 in July 2014.
It used to negotiate SPDY (and, in transition, HTTP/2).
NPN support appeared in OpenSSL 1.0.1. It does not work with TLSv1.3 [1].
ALPN is supported since OpenSSL 1.0.2.
The NPN support was dropped in Firefox 53 [2] and Chrome 51 [3].
[1] https://github.com/openssl/openssl/issues/3665
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1248198
[3] https://www.chromestatus.com/feature/5767920709795840
|
The request body filter chain is no longer called after processing
a DATA frame. Instead, we now post a read event to do this. This
ensures that multiple small DATA frames read during the same event loop
iteration are coalesced together, resulting in much faster processing.
Since rb->buf can now contain unprocessed data, window update is no
longer sent in ngx_http_v2_state_read_data() in case of flow control
being used due to filter buffering. Instead, the window will be updated
by ngx_http_v2_read_client_request_body_handler() in the posted read
event.
|
Following rb->filter_need_buffering changes, request body reading is
only finished after the filter chain is called and rb->last_saved is set.
As such, with r->request_body_no_buffering, timer on fc->read is no
longer removed when the last part of the body is received, potentially
resulting in incorrect behaviour.
The fix is to call ngx_http_v2_process_request_body() from the
ngx_http_v2_read_unbuffered_request_body() function instead of
directly calling ngx_http_v2_filter_request_body(), so the timer
is properly removed.
|
In the body read handler, the window was incorrectly calculated
based on the full buffer size instead of the amount of free space
in the buffer. If the request body is buffered by a filter, and
the buffer is not empty after the read event is generated by the
filter to resume request body processing, this could result in
"http2 negative window update" alerts.
Further, in the body read handler and in ngx_http_v2_state_read_data(),
the buffer wasn't cleared when the data were already written to disk,
so the client might get stuck without window updates.
|
Prodded by Clang Static Analyzer.
|
If a filter wants to buffer the request body during reading (for
example, to check it with an external scanner), it can now do so. To
make it possible, the code now checks rb->last_saved (introduced in the
previous change) along with rb->rest == 0.
Since in HTTP/2 this requires flow control to avoid overflowing the
request body buffer, filters which need buffering have to set the
rb->filter_need_buffering flag on the first filter call. (Note that
each filter is expected to call the next filter, so all filters will
be able to set the flag if needed.)
|
Previously, fully preread unbuffered requests larger than client body
buffer size were saved to disk, despite the fact that "unbuffered" is
expected to imply no disk buffering.
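
For context, a minimal sketch of a configuration that requests unbuffered
body reading (the upstream name is hypothetical):

    location / {
        proxy_request_buffering off;   # read the request body unbuffered
        proxy_pass http://backend;
    }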
|
The save body filter saves the request body to disk once the buffer is full.
Yet in HTTP/2 this might happen even if there is no need to save anything
to disk, notably when the content length is known and the END_STREAM flag
is sent in a separate empty DATA frame. The workaround is to provide an
additional byte in the buffer, so saving the request body won't be
triggered.
This fixes the unexpected request body disk buffering in HTTP/2 observed
after the previous change in exactly this case.
|
In particular, now the code always uses a buffer limited by
client_body_buffer_size. At the cost of an additional copy it
ensures that small DATA frames are not directly mapped to small
write() syscalls, but rather buffered in memory before writing.
Further, requests without Content-Length are no longer forced
to use temporary files.
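
A hedged sketch of the buffer setting this change relies on (the value is
illustrative; the built-in default is platform-dependent):

    client_body_buffer_size 16k;   # small DATA frames are accumulated here
                                   # before being written out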
|
Control characters (0x00-0x1f, 0x7f), space, and colon were never allowed in
header names. The only somewhat valid use is header continuation which nginx
never supported and which was explicitly obsoleted by RFC 7230.
Previously, such headers were considered invalid and were ignored by default
(as per ignore_invalid_headers directive). With this change, such headers
are unconditionally rejected.
It is expected to make nginx more resilient to various attacks, in particular,
with ignore_invalid_headers switched off (which is inherently insecure, though
nevertheless sometimes used in the wild).
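
For reference, the directive mentioned above, shown with its non-default
(and insecure) value:

    ignore_invalid_headers off;   # other invalid headers still pass through;
                                  # names with control characters, space, or
                                  # colon are now rejected unconditionally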
|
No valid CONNECT requests are expected to appear within nginx, since it
is not a forward proxy. Further, request line parsing will reject
proper CONNECT requests anyway, since we don't allow the authority-form of
request-target. On the other hand, RFC 7230 specifies separate message
length rules for CONNECT which we don't support, so make sure to always
reject CONNECTs to avoid potential abuse.
|
Similarly to lingering_time, it limits the total connection lifetime before
keepalive is switched off. The default is 1 hour, which is close to
the total maximum connection lifetime possible with default
keepalive_requests and keepalive_timeout.
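
A hedged sketch of the resulting defaults (values as described above and
in the documentation of the release introducing keepalive_time):

    keepalive_time     1h;    # new: limits total connection lifetime
    keepalive_timeout  75s;   # default
    keepalive_requests 1000;  # default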
|
Firefox uses several idle streams for PRIORITY frames [1], and
"http2_max_concurrent_streams 1;" results in "client sent too many
PRIORITY frames" errors when a connection is established by Firefox.
Fix is to relax the PRIORITY frames limit to use at least 100 as
the initial value (which is the minimum limit on the number of
concurrent streams recommended by the HTTP/2 protocol, so it is not
unreasonable for clients to assume that a similar number of idle
streams can be used for prioritization).
[1] https://hg.mozilla.org/mozilla-central/file/32a9e6e145d6e3071c3993a20bb603a2f388722b/netwerk/protocol/http/Http2Stream.cpp#l1270
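
The problematic setting from the report, for illustration:

    http2_max_concurrent_streams 1;   # with the fix, the PRIORITY frames
                                      # limit still starts at 100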
|
Without explicit handling, a zero timer was actually added, leading to
multiple unneeded syscalls. Further, sending the GOAWAY frame early
might be beneficial for clients.
Reported by Sergey Kandaurov.
|
With this change, behaviour of HTTP/2 becomes even closer to HTTP/1.x,
and client_header_timeout instead of keepalive_timeout is used before
the first request is received.
This fixes HTTP/2 connections being closed even before the first request
if "keepalive_timeout 0;" was used in the configuration; the problem
appeared in f790816a0e87 (1.19.7).
|
Instead, the size of one large_client_header_buffers buffer and the
total size of all large client header buffers are used.
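
The directive that now applies, shown with its documented default:

    large_client_header_buffers 4 8k;   # one buffer limits a single header
                                        # line; the total limits the whole
                                        # header block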
|
Previously, PINGs and other frames extended the possible keepalive time,
making it possible to keep an open HTTP/2 connection for a long time.
Now the connection is always closed once keepalive_timeout expires,
similarly to how it happens in HTTP/1.x.
Note that as a part of this change, incomplete frames no longer
trigger a separate timeout, so http2_recv_timeout (replaced by
client_header_timeout in previous patches) is essentially cancelled.
The client_header_timeout is, however, used for the SSL handshake and
while reading HEADERS frames.
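
A hedged summary of the timeouts now in effect for HTTP/2 (values are the
documented defaults):

    client_header_timeout 60s;   # SSL handshake and reading HEADERS frames
    keepalive_timeout     75s;   # between requests; no longer extended by
                                 # PINGs or other frames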
|
Instead, keepalive_timeout and keepalive_requests are now used. This
is expected to simplify HTTP/2 code and usage. This also matches the
directives used by the upstream module for all protocols.
In case of default settings, this effectively changes the maximum number
of requests per connection from 1000 to 100. This looks acceptable,
especially given that the HTTP/2 code now properly supports lingering
close.
Further, this changes the default keepalive timeout in HTTP/2 from 300
seconds to 75 seconds. This also looks acceptable, and is larger than
the PING interval used by Firefox (network.http.spdy.ping-threshold
defaults to 58s), the only browser to use PINGs.
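
A hedged sketch of the replacement, using only the defaults mentioned above:

    keepalive_requests 100;   # replaces http2_max_requests (default 1000)
    keepalive_timeout  75s;   # replaces the HTTP/2 keepalive timeout
                              # (default 300s)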
|
Instead, the client_header_timeout is now used for HTTP/2 reading.
Further, the timeout is now set only once, until no further data are
left to read, similarly to how client_header_timeout is used in other
places.
|
The spdy_* directives have not been available since the introduction of
the HTTP/2 module in nginx 1.9.5, more than five years ago.
|
New connections are marked reusable by ngx_http_init_connection() if there
are no data available for reading. As a result, if SSL is not used,
ngx_http_v2_init() might be called when the connection is marked reusable.
If a HEADERS frame is immediately available for reading, this resulted
in the connection being preserved in the reusable state with an active request,
and possibly closed later as if during worker shutdown (that is, after
all active requests were finalized).
Fix is to explicitly mark connections non-reusable in ngx_http_v2_init()
instead of (incorrectly) assuming they are already non-reusable.
Found by Sergey Kandaurov.
|
Prodded by Taewoo Kim.
|
This is particularly important in HTTP/2, where keepalive connections
are closed with lingering. Before the patch, reusing a keepalive HTTP/2
connection resulted in the connection waiting for lingering close to
remain in the reusable connections queue, preventing ngx_drain_connections()
from closing additional connections.
The patch fixes it by marking the connection reusable again, and so
moving it in the reusable connections queue. Further, it makes it
actually possible to reuse such connections if needed.
|
This part somehow slipped away from c5840ca2063d.
While it is not expected to be needed in case of lingering close,
it is good to keep it for correctness (see 2b5528023f6b).
|
Ensure c->recv is properly reset to ngx_recv if SSL_shutdown()
blocks on writing.
The bug had appeared in 554c6ae25ffc.
|
When doing lingering close, the socket was first shut down for writing,
so SSL shutdown initiated after lingering close was not able to send
the close_notify alerts (ticket #2056).
The fix is to call ngx_ssl_shutdown() before shutting down the socket.
|
HTTP/2 code failed to run posted requests after calling the request body
handler, and this resulted in a connection hang if a subrequest was
created in the body handler and no other actions were taken.
|
If 400 errors were redirected to an upstream server using the error_page
directive, DATA frames from the client might cause a segmentation fault
due to a null pointer dereference. The bug had appeared in
6989:2c4dbcd6f2e4 (1.13.0).
Fix is to skip such frames in ngx_http_v2_state_read_data() (similarly
to 7561:9f1f9d6e056a). With the fix, the behaviour of 400 errors in
HTTP/2 is now similar to that in HTTP/1.x, that is, nginx doesn't try
to read the request body.
Note that proxying 400 errors, as well as other early-stage errors, to
upstream servers might not be a good idea anyway. These errors imply
that reading and processing of the request (and the request headers)
wasn't complete, and proxying such an incomplete request might lead to
various errors.
Reported by Chenglong Zhang.
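
A hedged sketch of the configuration pattern that triggered the crash (names
hypothetical; as noted above, such proxying is discouraged):

    error_page 400 = @backend;

    location @backend {
        proxy_pass http://backend;
    }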
|
Prodded by Xu Yang.
|
Without the flag, SSL shutdown is attempted on such connections,
resulting in useless work and/or bogus "SSL_shutdown() failed
(SSL: ... bad write retry)" critical log messages if there are
blocked writes.
|
After sending the GOAWAY frame, a connection is now closed using
the lingering close mechanism.
This allows for the reliable delivery of the GOAWAY frames, while
also fixing connection resets observed when http2_max_requests is
reached (ticket #1250), or with graceful shutdown (ticket #1544),
when some additional data from the client is received on a fully
closed connection.
For HTTP/2, the settings lingering_close, lingering_timeout, and
lingering_time are taken from the "server" level.
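
The server-level settings involved, as a hedged sketch with their documented
defaults:

    lingering_close   on;    # close connections with lingering
    lingering_time    30s;   # total time spent in lingering close
    lingering_timeout 5s;    # timeout for a single lingering read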
|
Previously, invalid connection preface errors were only logged at debug
level, providing no visible feedback, in particular, when a plain text
HTTP/2 listening socket is erroneously used for HTTP/1.x connections.
Now these are explicitly logged at the info level, much like other
client-related errors.
|
A connection could get stuck without timers if a client has partially sent
the HEADERS frame such that it was split on the individual header boundary.
In this case, it cannot be processed without the rest of the HEADERS frame.
The fix is to call ngx_http_v2_state_headers_save() in this case. Normally,
it would be called from the ngx_http_v2_state_header_block() handler on the
next iteration, when there is not enough data to continue processing. This
isn't the case if recv_buffer became empty and there's no more data to read.
|