Previous behaviour was to pass everything to the client, but this
seems to be suboptimal and causes issues (ticket #1695). Fix is to
drop extra data instead, as it naturally happens in most clients.
Additionally, we now also issue a warning if the response is too
short, and make sure the fact it is truncated is propagated to the
client. The u->error flag is introduced to make it possible to
propagate the error to the client in case of unbuffered proxying.
For responses to HEAD requests there is an exception: we do allow
both responses without body and responses with body matching the
Content-Length header.

Previous behaviour was to pass everything to the client, but this
seems to be suboptimal and causes issues (ticket #1695). Fix is to
drop extra data instead, as it naturally happens in most clients.
This change covers generic buffered and unbuffered filters as used
in the scgi and uwsgi modules. Appropriate input filter init
handlers are provided by the scgi and uwsgi modules to set corresponding
lengths.
Note that for responses to HEAD requests there is an exception:
we do allow any response length. This is because responses to HEAD
requests might be actual full responses, and it is up to nginx
to remove the response body. If caching is enabled, only full
responses matching the Content-Length header will be cached
(see b779728b180c).

With level-triggered event methods it is important to specify
the NGX_CLOSE_EVENT flag to ngx_handle_read_event(), otherwise
the event won't be removed, resulting in a CPU hog.
Reported by Patrick Wollgast.

In case of filter finalization, essential request fields like r->uri,
r->args, etc. could be changed, which affected the cache update subrequest.
Also, after filter finalization r->cache could be set to NULL, leading to
null pointer dereference in ngx_http_upstream_cache_background_update().
The fix is to create background cache update subrequest before sending the
cached response.
Since its initial introduction in 1aeaae6e9446 (1.11.10), the background
cache update subrequest was created after sending the cached response
because otherwise it
blocked the parent request output. In 9552758a786e (1.13.1) background
subrequests were introduced to eliminate the delay before sending the final
part of the cached response. This also made it possible to create the
background cache update subrequest before sending the response.
Note that creating the subrequest earlier does not change the fact that in case
of filter finalization the background cache update subrequest will likely not
have enough time to successfully update the cache entry. Filter finalization
leads to the main request termination as soon as the current iteration of request
processing is complete.

The variables now do not depend on the presence of the HTTP status code
in the response.
If the corresponding event occurred, the variables contain the time between
the request creation and the event, and "-" otherwise.
Previously, the intermediate value of the $upstream_response_time variable
held a unix timestamp.
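As an illustration, these variables are typically used in access logging;
a hypothetical log_format (the format itself is illustrative only):

    log_format upstream_time '$remote_addr "$request" $status '
                             'connect=$upstream_connect_time '
                             'header=$upstream_header_time '
                             'response=$upstream_response_time';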
The directives enable the use of the SO_KEEPALIVE option on
upstream connections. By default, the value is left unchanged.
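For example, assuming a proxied location (the directive comes in per-module
variants; the proxy one is shown here):

    location / {
        proxy_pass http://backend;
        proxy_socket_keepalive on;
    }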
The problem does not manifest itself currently, because in case of
non-buffered reading, the chain link created by the u->create_request method
consists of a single element.
Found by PVS-Studio.

The directive configures the maximum number of requests allowed on
a connection kept in the cache. Once a connection reaches the number
of requests configured, it is no longer saved to the cache.
The default is 100.
Much like keepalive_requests for client connections, this is mostly
a safeguard to make sure connections are closed periodically and the
memory allocated from the connection pool is freed.
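A sketch of an upstream block using the directive (values illustrative):

    upstream backend {
        server 127.0.0.1:8080;

        keepalive 16;
        keepalive_requests 100;
    }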
In TLSv1.3, NewSessionTicket messages arrive after the handshake and
can come at any time. Therefore we use a callback to save the session
when we know about it. This approach works for < TLSv1.3 as well.
The callback function is set once per location on merge phase.
Since SSL_get_session() in BoringSSL returns an unresumable session for
TLSv1.3, peer save_session() methods have been updated as well to use a
session supplied within the callback. To preserve the API, the session is
cached in c->ssl->session. It is preferably accessed in save_session()
methods by ngx_ssl_get_session() and ngx_ssl_get0_session() wrappers.
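On the configuration side, upstream TLS session reuse remains controlled by
the existing directive, e.g. for proxying (sketch):

    location / {
        proxy_pass https://backend;
        proxy_ssl_session_reuse on;
    }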
With gRPC it is possible that request sending is blocked due to flow
control. Moreover, further sending might only be allowed once the
backend sees all the data we've already sent. With such a backend
it is required to clear the TCP_NOPUSH socket option to make sure all
the data we've sent are actually delivered to the backend.
As such, we now clear TCP_NOPUSH in ngx_http_upstream_send_request()
also on NGX_AGAIN if c->write->ready is set. This fixes a test (which
waits for all the 64k bytes as per initial window before allowing more
bytes) with sendfile enabled when the body was written to a file
in a different context.

Now tcp_nopush on peer connections is disabled if it is disabled on
the client connection, similar to how we handle c->sendfile. Previously,
tcp_nopush was always used on upstream connections, regardless of
the "tcp_nopush" directive.
With u->conf->preserve_output set the request body file might be used
after the response header is sent, so avoid cleaning it. (Normally
this is not a problem as u->conf->preserve_output is only set with
r->request_body_no_buffering, but the request body might already be
written to a file in a different context.)

Previously, ngx_http_upstream_process_header() might be called after
we've finished reading response headers and switched to a different read
event handler, leading to errors with gRPC proxying. Additionally,
the u->conf->read_timeout timer might be re-armed during reading response
headers (while this is expected to be a single timeout on reading
the whole response header).

Previously, ngx_http_upstream_test_next() used an outdated condition on
whether it will be possible to switch to a different server or not. It
did not take into account restrictions on non-idempotent requests, requests
with non-buffered request body, and the next upstream timeout.
For such requests, switching to the next upstream server was rejected
later in ngx_http_upstream_next(), resulting in nginx's own error page
being returned instead of the original upstream response.
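For reference, one of the restrictions mentioned above: retrying
non-idempotent requests has to be enabled explicitly (sketch):

    proxy_next_upstream error timeout non_idempotent;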
No functional changes.

The flag can be used to continue sending the request body even after we've
got a response from the backend. In particular, this is needed for gRPC
proxying of bidirectional streaming RPCs, and also to send control frames
in other forms of RPCs.

The flag indicates whether the last ngx_output_chain() call returned NGX_AGAIN
or not. If the flag is set, we arm the u->conf->send_timeout timer.
The flag complements the c->write->ready test, and allows one to stop sending
the request body in an output filter due to protocol-specific flow
control.

Basic trailer headers support allows one to access response trailers
via the $upstream_trailer_* variables.
Additionally, the u->conf->pass_trailers flag was introduced. When the
flag is set, trailer headers from the upstream response are passed to
the client. Like normal headers, trailer headers will be hidden
if present in u->conf->hide_headers_hash.
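A hypothetical use with gRPC, logging a trailer via the new variables (the
format is illustrative only):

    log_format grpc '$remote_addr "$request" $status '
                    'grpc-status=$upstream_trailer_grpc_status';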
Previously, only the upstream response body could be accessed with the
NGX_HTTP_SUBREQUEST_IN_MEMORY feature. Now any response body from a subrequest
can be saved in a memory buffer. It is available as a single buffer in r->out
and the buffer size is configured by the subrequest_output_buffer_size
directive.
Upstream, proxy and fastcgi code used to handle the old-style feature is
removed.
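Configuration sketch for the new directive (size value illustrative):

    location /sub {
        subrequest_output_buffer_size 8k;
        proxy_pass http://backend;
    }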
After 1e720b0be7ec, it's neither specially processed nor copied
when redirecting with X-Accel-Redirect.

Following ad3f342f14ba046c (1.9.13), it is possible that a request whose
header was already sent will be finalized with NGX_HTTP_BAD_GATEWAY,
triggering an attempt to return additional error response and the
"header already sent" alert as a result.
In particular, it is trivial to reproduce the problem with a HEAD request
and caching enabled. With caching enabled nginx will change HEAD to GET
and will set u->pipe->downstream_error to suppress sending the response
body to the client. When a backend-related error occurs (for example,
proxy_read_timeout expires), ngx_http_finalize_upstream_request() will
be called with NGX_HTTP_BAD_GATEWAY. After ad3f342f14ba046c this will
result in ngx_http_finalize_request(NGX_HTTP_BAD_GATEWAY).
Fix is to move u->pipe->downstream_error handling to a later point,
where all special response codes are changed to NGX_ERROR.
Reported by Jan Prachar,
http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html.

The capability is retained automatically in unprivileged worker processes after
changing UID if transparent proxying is enabled at least once in the nginx
configuration.
The feature is only available on Linux.
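Transparent proxying, which triggers retaining the capability, is enabled
with the existing proxy_bind directive, e.g.:

    location / {
        proxy_bind $remote_addr transparent;
        proxy_pass http://backend;
    }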
If the data to write is bigger than what the socket can send, and the
remainder is smaller than NGX_SSL_BUFSIZE, then SSL_write() fails with
SSL_ERROR_WANT_WRITE. The remainder of the payload, however, is successfully
copied to the low-level buffer and all the output chain buffers are
flushed. This means that retry logic doesn't work because
ngx_http_upstream_process_non_buffered_request() checks only if there's
anything in the output chain buffers and ignores the fact that something
may be buffered in low-level parts of the stack.
Signed-off-by: Patryk Lesiewicz <patryk@google.com>

Upgrading an upstream connection is usually followed by reading from the client
which a subrequest is not allowed to do. Moreover, accessing the header_in
request field while processing an upgraded connection ends up with a null
pointer dereference, since the header_in buffer is only created for the main
request.

If proxy_next_upstream includes http_503/http_504, and the upstream
returns 503/504, $upstream_status converted this to 502 for any
values except the last one.
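A configuration of the kind affected, where $upstream_status now records the
actual 503/504 received from each tried server (sketch):

    proxy_next_upstream error timeout http_503 http_504;
    proxy_pass http://backend;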
The NGX_DONE value returned from ngx_http_upstream_cache_send() indicates
that upstream was already finalized in ngx_http_upstream_process_headers().
It was treated as a generic error which resulted in duplicate finalization.
Also, NGX_HTTP_UPSTREAM_INVALID_HEADER returned from
ngx_http_upstream_cache_send() is now handled. Previously, it could be
passed to ngx_http_upstream_finalize_request(), and since it is below
NGX_HTTP_SPECIAL_RESPONSE, the client connection could get stuck.

When parsing of headers in a cache file fails, already parsed headers
need to be cleared, and protocol state needs to be reinitialized. To do
so, u->request_sent is now set to ensure ngx_http_upstream_reinit() will
be called.
This change complements improvements in 46ddff109e72.

When caching intercepted errors, previous behaviour was to use
the proxy_cache_valid times specified, regardless of various cache control
headers present in the response. Fix is to check u->cacheable and
use u->cache->valid_sec as set by various cache control response headers,
similar to how we do this in the normal caching code path.
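A sketch where this matters: with the configuration below, an intercepted 404
carrying restrictive cache control headers was previously cached for the full
proxy_cache_valid time anyway (names and values illustrative):

    location / {
        proxy_pass http://backend;
        proxy_cache one;
        proxy_cache_valid 404 10m;
        proxy_intercept_errors on;
        error_page 404 /404.html;
    }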
If a cache file is truncated, it is possible that u->process_header()
will return NGX_AGAIN. Added appropriate handling of this case by
changing the error to NGX_HTTP_UPSTREAM_INVALID_HEADER.
Also, added appropriate logging of this and NGX_HTTP_UPSTREAM_INVALID_HEADER
cases at the "crit" level. Note that this will result in duplicate logging
in case of NGX_HTTP_UPSTREAM_INVALID_HEADER. While this is something better
to avoid, it is considered overkill to implement cache-specific
error logging in u->process_header().
Additionally, u->buffer.start is now reset to be able to receive a new
response, and u->cache_status is set to MISS to provide the value in the
$upstream_cache_status variable, much like it happens on other cache file
errors detected by ngx_http_file_cache_read(), instead of HIT, which is
believed to be misleading.

This fixes at least the following cases, where no last_modified_time
(assuming caching is not enabled) resulted in incorrect behaviour:
- slice filter and If-Range requests (ticket #1357; see the sketch below);
- If-Range requests with proxy_force_ranges;
- expires modified.
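For the slice case, a typical configuration where If-Range requests
previously misbehaved when no last_modified_time was available (sketch,
caching not enabled):

    location / {
        slice 1m;
        proxy_pass http://backend;
        proxy_set_header Range $slice_range;
    }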
No functional changes.

The new request flag "preserve_body" indicates that the request body file should
not be removed by the upstream module because it may be used later by a
subrequest. The flag is set by the SSI (ticket #585), addition and slice
modules. Additionally, it is also set by the upstream module when a background
cache update subrequest is started to prevent the request body file removal
after an internal redirect. Only the main request is now allowed to remove the
file.

This also fixes potential undefined behaviour in the range and slice filter
modules, caused by local overflows of signed integers in expressions.

This change reworks 13a5f4765887 to only run posted requests once,
with nothing on stack. Running posted requests with other request
functions on stack may result in use-after-free in case of errors,
similar to the one reported in #788.
To only run posted requests once, a separate function was introduced
to be used as the ssl handshake handler in c->ssl->handler,
ngx_http_upstream_ssl_handshake_handler(). Now ngx_http_run_posted_requests()
is only called in this function, and not in ngx_http_upstream_ssl_handshake(),
which may be called directly on stack.
Additionally, ngx_http_upstream_ssl_handshake_handler() now does appropriate
debug logging of the current subrequest, similar to what is done in other
event handlers.

Previously, the upstream resolve handler always called
ngx_http_run_posted_requests() to run posted requests after processing the
resolver response. However, if the handler was called directly from the
ngx_resolve_name() function (for example, if the resolver response was cached),
running posted requests from the handler could lead to the following errors:
- If the request was scheduled for termination, it could actually be terminated
in the resolve handler. Upper stack frames could reference the freed request
object in this case.
- If a significant number of requests were posted, and for each of them the
resolve handler was called directly from the ngx_resolve_name() function,
posted requests could be run recursively and lead to stack overflow.
Now ngx_http_run_posted_requests() is only called from asynchronously invoked
resolve handlers.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

Previously, cache background update might not work as expected, making the client
wait for it to complete before receiving the final part of a stale response.
This could happen if the response could not be sent to the client socket in one
filter chain call.
Now background cache update is done in a background subrequest. This type of
subrequest does not block any other subrequests or the main request.
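The behaviour is controlled by the existing directives (sketch):

    proxy_cache_use_stale updating;
    proxy_cache_background_update on;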
If initialization of a header failed for some reason after ngx_list_push(),
leaving the header as is can result in uninitialized memory access by
the header filter or the log module. The fix is to clear partially
initialized headers in case of errors.
For the Cache-Control header, the fix is to postpone pushing
r->headers_out.cache_control until its value is completed.

This change adds the "http_429" parameter to "proxy_next_upstream" for
retrying rate-limited requests, and to "proxy_cache_use_stale" for
serving stale cached responses after being rate-limited.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
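Example usage (sketch):

    proxy_next_upstream error timeout http_429;
    proxy_cache_use_stale updating http_429;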
With post_action or subrequests, it is possible that the timer set for
wev->delayed will expire while the active subrequest write event handler
is not ready to handle this. This results in request hangs as observed
with limit_rate / sendfile_max_chunk and post_action (ticket #776) or
subrequests (ticket #1228).
Moving the handling to the connection event handler fixes the hangs observed,
and also slightly simplifies the code.
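One configuration of the affected kind (ticket #776; values and the
post_action target are illustrative):

    location / {
        sendfile on;
        sendfile_max_chunk 32k;
        limit_rate 100k;
        post_action /done;
    }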
If the subrequest is already finalized, the handler set with aio_write
may still be used by sendfile in threads when using range requests
(see also e4c1f5b32868, and the original note in 9fd738b85fad). Calling
an already finalized subrequest's r->write_event_handler in practice
results in a request hang in some cases.
Fix is to trigger the connection event handler if the subrequest was already
finalized.

With "proxy_ignore_client_abort off" (the default), the upstream module changes
r->read_event_handler to ngx_http_upstream_rd_check_broken_connection().
If the handler is not cleared during upstream finalization, it can be
triggered later, causing unexpected effects, if, for example, a request
was redirected to a different location using error_page or X-Accel-Redirect.
In particular, it makes "proxy_ignore_client_abort on" non-working after
a redirection in a configuration like this:
    location = / {
        error_page 502 = /error;
        proxy_pass http://127.0.0.1:8082;
    }

    location /error {
        proxy_pass http://127.0.0.1:8083;
        proxy_ignore_client_abort on;
    }
It is also known to cause segmentation faults when aio is used, see
http://mailman.nginx.org/pipermail/nginx-ru/2015-August/056570.html.
Fix is to explicitly set r->read_event_handler to ngx_http_block_reading()
during upstream finalization, similar to how it is done in the request body
reading code and in the limit_req module.