path: root/src/http/ngx_http_upstream.c
...
* FastCGI: protection from responses with wrong length. (Maxim Dounin, 2020-07-06)

  Previous behaviour was to pass everything to the client, but this seems to be suboptimal and causes issues (ticket #1695). Fix is to drop extra data instead, as it naturally happens in most clients.

  Additionally, we now also issue a warning if the response is too short, and make sure the fact it is truncated is propagated to the client. The u->error flag is introduced to make it possible to propagate the error to the client in case of unbuffered proxying.

  For responses to HEAD requests there is an exception: we do allow both responses without body and responses with body matching the Content-Length header.
* Upstream: drop extra data sent by upstream. (Maxim Dounin, 2020-07-06)

  Previous behaviour was to pass everything to the client, but this seems to be suboptimal and causes issues (ticket #1695). Fix is to drop extra data instead, as it naturally happens in most clients.

  This change covers generic buffered and unbuffered filters as used in the scgi and uwsgi modules. Appropriate input filter init handlers are provided by the scgi and uwsgi modules to set corresponding lengths.

  Note that for responses to HEAD requests there is an exception: we do allow any response length. This is because responses to HEAD requests might be actual full responses, and it is up to nginx to remove the response body. If caching is enabled, only full responses matching the Content-Length header will be cached (see b779728b180c).
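  A minimal sketch in plain C of the length-capping idea behind these two changes (not the actual nginx input filters; forward_len() and the sample numbers are invented for illustration):

      #include <stddef.h>
      #include <stdio.h>

      /*
       * Given the expected body length (from Content-Length), the number of
       * body bytes already forwarded, and the size of a newly received chunk,
       * decide how much to pass on to the client; anything beyond the
       * expected length counts as extra data and is dropped.
       */
      static size_t
      forward_len(size_t expected, size_t received, size_t incoming, int *extra)
      {
          size_t  remaining = (received < expected) ? expected - received : 0;

          *extra = (incoming > remaining);
          return (incoming < remaining) ? incoming : remaining;
      }

      int
      main(void)
      {
          int     extra;
          size_t  pass;

          /* Content-Length: 100, 90 bytes already sent, 20 more arrive:
           * forward 10 bytes, drop 10 as extra data */
          pass = forward_len(100, 90, 20, &extra);
          printf("forward %zu bytes, extra data: %s\n",
                 pass, extra ? "yes (dropped)" : "no");
          return 0;
      }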
* Upstream: jump out of loop after matching the status code. (Jinhua Tan, 2020-05-13)
* Upstream: fixed EOF handling in unbuffered and upgraded modes. (Maxim Dounin, 2019-07-18)

  With level-triggered event methods it is important to specify the NGX_CLOSE_EVENT flag to ngx_handle_read_event(), otherwise the event won't be removed, resulting in a CPU hog.

  Reported by Patrick Wollgast.
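  A hedged, nginx-style sketch of the pattern involved (a fragment, not the actual patch; r and u are assumed to be the usual request and upstream objects):

      ngx_connection_t  *c = u->peer.connection;

      if (c->read->eof) {
          /*
           * with level-triggered methods (select, poll, /dev/poll) the
           * descriptor keeps reporting readability after EOF; passing
           * NGX_CLOSE_EVENT asks ngx_handle_read_event() to remove the
           * event instead of letting the worker spin on it
           */
          if (ngx_handle_read_event(c->read, NGX_CLOSE_EVENT) != NGX_OK) {
              ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
              return;
          }
      }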
* Upstream: background cache update before cache send (ticket #1782). (Roman Arutyunyan, 2019-06-03)

  In case of filter finalization, essential request fields like r->uri, r->args etc. could be changed, which affected the cache update subrequest. Also, after filter finalization r->cache could be set to NULL, leading to a null pointer dereference in ngx_http_upstream_cache_background_update(). The fix is to create the background cache update subrequest before sending the cached response.

  Since its initial introduction in 1aeaae6e9446 (1.11.10), the background cache update subrequest was created after sending the cached response because otherwise it blocked the parent request output. In 9552758a786e (1.13.1) background subrequests were introduced to eliminate the delay before sending the final part of the cached response. This also made it possible to create the background cache update subrequest before sending the response.

  Note that creating the subrequest earlier does not change the fact that in case of filter finalization the background cache update subrequest will likely not have enough time to successfully update the cache entry. Filter finalization leads to the main request termination as soon as the current iteration of request processing is complete.
* Variables support in limit_rate and limit_rate_after (ticket #293). (Ruslan Ermilov, 2019-04-24)
* Upstream: fixed logging of required buffer size (ticket #1722). (Chanhun Jeong, 2019-02-11)
* Upstream: implemented $upstream_bytes_sent. (Ruslan Ermilov, 2018-12-13)
* Upstream: revised upstream response time variables. (Vladimir Homutov, 2018-11-21)

  Variables now do not depend on the presence of the HTTP status code in the response. If the corresponding event occurred, variables contain the time between request creation and the event, and "-" otherwise.

  Previously, the intermediate value of the $upstream_response_time variable held a unix timestamp.
* Upstream: proxy_socket_keepalive and friends. (Vladimir Homutov, 2018-10-03)

  The directives enable the use of the SO_KEEPALIVE option on upstream connections. By default, the value is left unchanged.
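  For reference, SO_KEEPALIVE is a plain socket option; a minimal sketch in ordinary POSIX C (not the nginx code, and enable_tcp_keepalive() is a name made up here):

      #include <sys/socket.h>

      /* Enable TCP keepalive probes on an already created upstream socket,
       * leaving all other keepalive parameters at the system defaults. */
      static int
      enable_tcp_keepalive(int fd)
      {
          int  on = 1;

          return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
      }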
* Upstream: fixed request chain traversal (ticket #1618). (Vladimir Homutov, 2018-08-24)

  The problem does not manifest itself currently, because in case of non-buffered reading, the chain link created by the u->create_request method consists of a single element.

  Found by PVS-Studio.
* Upstream keepalive: keepalive_requests directive. (Maxim Dounin, 2018-08-10)

  The directive configures the maximum number of requests allowed on a connection kept in the cache. Once a connection reaches the configured number of requests, it is no longer saved to the cache. The default is 100.

  Much like keepalive_requests for client connections, this is mostly a safeguard to make sure connections are closed periodically and the memory allocated from the connection pool is freed.
* SSL: save sessions for upstream peers using a callback function. (Sergey Kandaurov, 2018-07-17)

  In TLSv1.3, NewSessionTicket messages arrive after the handshake and can come at any time. Therefore we use a callback to save the session when we know about it. This approach works for < TLSv1.3 as well. The callback function is set once per location during the merge phase.

  Since SSL_get_session() in BoringSSL returns an unresumable session for TLSv1.3, peer save_session() methods have been updated as well to use a session supplied within the callback. To preserve the API, the session is cached in c->ssl->session. It is preferably accessed in save_session() methods through the ngx_ssl_get_session() and ngx_ssl_get0_session() wrappers.
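  A minimal OpenSSL sketch of the callback approach described above (not nginx's ngx_ssl_* code; the single cached_session slot is purely illustrative):

      #include <openssl/ssl.h>

      static SSL_SESSION  *cached_session;    /* one slot, illustration only */

      static int
      save_session_cb(SSL *ssl, SSL_SESSION *sess)
      {
          (void) ssl;

          if (cached_session) {
              SSL_SESSION_free(cached_session);
          }

          cached_session = sess;

          /* returning 1 means we take ownership of the reference and will
           * release it with SSL_SESSION_free() ourselves */
          return 1;
      }

      void
      enable_client_session_saving(SSL_CTX *ctx)
      {
          /* the new-session callback only fires with client-side caching
           * enabled; OpenSSL's internal store is not needed here */
          SSL_CTX_set_session_cache_mode(ctx,
              SSL_SESS_CACHE_CLIENT | SSL_SESS_CACHE_NO_INTERNAL);

          SSL_CTX_sess_set_new_cb(ctx, save_session_cb);
      }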
* Upstream: fixed tcp_nopush with gRPC. (Maxim Dounin, 2018-07-02)

  With gRPC it is possible that sending a request is blocked due to flow control. Moreover, further sending might only be allowed once the backend sees all the data we've already sent. With such a backend it is required to clear the TCP_NOPUSH socket option to make sure all the data we've sent are actually delivered to the backend.

  As such, we now clear TCP_NOPUSH in ngx_http_upstream_send_request() also on NGX_AGAIN if c->write->ready is set.

  This fixes a test (which waits for all the 64k bytes as per the initial window before allowing more bytes) with sendfile enabled when the body was written to a file in a different context.
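  A hedged sketch of what clearing the option amounts to at the socket level (plain C, not nginx's ngx_tcp_push(); tcp_push_now() is a name invented here):

      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <netinet/tcp.h>

      /* Clear the "no push" option so the kernel transmits whatever is
       * sitting in the send buffer; TCP_NOPUSH is the BSD name, TCP_CORK
       * the Linux one. */
      static int
      tcp_push_now(int fd)
      {
          int  off = 0;

      #if defined(TCP_NOPUSH)
          return setsockopt(fd, IPPROTO_TCP, TCP_NOPUSH, &off, sizeof(off));
      #elif defined(TCP_CORK)
          return setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
      #else
          (void) fd;
          (void) off;
          return 0;
      #endif
      }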
* Upstream: fixed unexpected tcp_nopush usage on peer connections. (Maxim Dounin, 2018-07-02)

  Now tcp_nopush on peer connections is disabled if it is disabled on the client connection, similar to how we handle c->sendfile. Previously, tcp_nopush was always used on upstream connections, regardless of the "tcp_nopush" directive.
* Upstream: disable body cleanup with preserve_output (ticket #1565). (Maxim Dounin, 2018-06-13)

  With u->conf->preserve_output set the request body file might be used after the response header is sent, so avoid cleaning it. (Normally this is not a problem as u->conf->preserve_output is only set with r->request_body_no_buffering, but the request body might be already written to a file in a different context.)
* Upstream: fixed u->conf->preserve_output (ticket #1519). (Maxim Dounin, 2018-04-05)

  Previously, ngx_http_upstream_process_header() might be called after we've finished reading response headers and switched to a different read event handler, leading to errors with gRPC proxying.

  Additionally, the u->conf->read_timeout timer might be re-armed while reading response headers (whereas this is expected to be a single timeout on reading the whole response header).
* Upstream: fixed ngx_http_upstream_test_next() conditions. (Maxim Dounin, 2018-04-03)

  Previously, ngx_http_upstream_test_next() used an outdated condition on whether it will be possible to switch to a different server or not. It did not take into account restrictions on non-idempotent requests, requests with a non-buffered request body, and the next upstream timeout.

  For such requests, switching to the next upstream server was rejected later in ngx_http_upstream_next(), resulting in nginx's own error page being returned instead of the original upstream response.
* Fixed checking ngx_tcp_push() and ngx_tcp_nopush() return values. (Ruslan Ermilov, 2018-03-19)

  No functional changes.
* Upstream: u->conf->preserve_output flag. (Maxim Dounin, 2018-03-17)

  The flag can be used to continue sending the request body even after we've got a response from the backend. In particular, this is needed for gRPC proxying of bidirectional streaming RPCs, and also to send control frames in other forms of RPCs.
* Upstream: u->request_body_blocked flag. (Maxim Dounin, 2018-03-17)

  The flag indicates whether the last ngx_output_chain() returned NGX_AGAIN or not. If the flag is set, we arm the u->conf->send_timeout timer.

  The flag complements the c->write->ready test, and makes it possible to stop sending the request body in an output filter due to protocol-specific flow control.
* Upstream: trailers support, u->conf->pass_trailers flag. (Maxim Dounin, 2018-03-17)

  Basic support for trailer headers allows one to access response trailers via the $upstream_trailer_* variables.

  Additionally, the u->conf->pass_trailers flag was introduced. When the flag is set, trailer headers from the upstream response are passed to the client. Like normal headers, trailer headers will be hidden if present in u->conf->hide_headers_hash.
* Generic subrequests in memory. (Roman Arutyunyan, 2018-02-28)

  Previously, only the upstream response body could be accessed with the NGX_HTTP_SUBREQUEST_IN_MEMORY feature. Now any response body from a subrequest can be saved in a memory buffer. It is available as a single buffer in r->out and the buffer size is configured by the subrequest_output_buffer_size directive.

  Upstream, proxy and fastcgi code used to handle the old-style feature is removed.
* Basic support of the Link response header. (Ruslan Ermilov, 2018-02-08)
* Upstream: removed X-Powered-By from the list of special headers. (Ruslan Ermilov, 2018-01-30)

  After 1e720b0be7ec, it's neither specially processed nor copied when redirecting with X-Accel-Redirect.
* Upstream: fixed "header already sent" alerts on backend errors.Maxim Dounin2018-01-11
| | | | | | | | | | | | | | | | | | | | | Following ad3f342f14ba046c (1.9.13), it is possible that a request where header was already sent will be finalized with NGX_HTTP_BAD_GATEWAY, triggering an attempt to return additional error response and the "header already sent" alert as a result. In particular, it is trivial to reproduce the problem with a HEAD request and caching enabled. With caching enabled nginx will change HEAD to GET and will set u->pipe->downstream_error to suppress sending the response body to the client. When a backend-related error occurs (for example, proxy_read_timeout expires), ngx_http_finalize_upstream_request() will be called with NGX_HTTP_BAD_GATEWAY. After ad3f342f14ba046c this will result in ngx_http_finalize_request(NGX_HTTP_BAD_GATEWAY). Fix is to move u->pipe->downstream_error handling to a later point, where all special response codes are changed to NGX_ERROR. Reported by Jan Prachar, http://mailman.nginx.org/pipermail/nginx-devel/2018-January/010737.html.
* Retain CAP_NET_RAW capability for transparent proxying. (Roman Arutyunyan, 2017-12-13)

  The capability is retained automatically in unprivileged worker processes after changing UID if transparent proxying is enabled at least once in the nginx configuration. The feature is only available on Linux.
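  A hedged sketch of the general mechanism using libcap (nginx itself manipulates the capability sets directly; drop_uid_keep_net_raw() is a name made up here, and the code needs -lcap to link):

      #include <sys/prctl.h>
      #include <sys/capability.h>
      #include <unistd.h>

      /* Keep capabilities across setuid(), then re-raise CAP_NET_RAW in the
       * effective set so the unprivileged process can still do what
       * transparent proxying requires. */
      static int
      drop_uid_keep_net_raw(uid_t uid)
      {
          cap_t        caps;
          cap_value_t  cap_list[1] = { CAP_NET_RAW };

          if (prctl(PR_SET_KEEPCAPS, 1) == -1) {   /* survive the UID change */
              return -1;
          }

          if (setuid(uid) == -1) {
              return -1;
          }

          caps = cap_get_proc();
          if (caps == NULL) {
              return -1;
          }

          /* the permitted set was kept by PR_SET_KEEPCAPS; the effective
           * set is cleared by setuid() and has to be raised again */
          if (cap_set_flag(caps, CAP_EFFECTIVE, 1, cap_list, CAP_SET) == -1
              || cap_set_proc(caps) == -1)
          {
              cap_free(caps);
              return -1;
          }

          return cap_free(caps);
      }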
* Upstream: flush low-level buffers on write retry. (Patryk Lesiewicz, 2017-12-01)

  If the data to write is bigger than what the socket can send, and the remainder is smaller than NGX_SSL_BUFSIZE, then SSL_write() fails with SSL_ERROR_WANT_WRITE. The remainder of the payload is, however, successfully copied to the low-level buffer and all the output chain buffers are flushed.

  This means that the retry logic doesn't work because ngx_http_upstream_process_non_buffered_request() checks only if there's anything in the output chain buffers and ignores the fact that something may be buffered in low-level parts of the stack.

  Signed-off-by: Patryk Lesiewicz <patryk@google.com>
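  A generic OpenSSL sketch of the behaviour being worked around (not the nginx fix; write_all() is invented here, and the wait for writability is left as a placeholder):

      #include <openssl/ssl.h>

      /* SSL_write() can report SSL_ERROR_WANT_WRITE even though part of the
       * payload has already been copied into the SSL layer's own buffer, so
       * "my output chain is empty" does not mean "everything was flushed". */
      static int
      write_all(SSL *ssl, const char *buf, int len)
      {
          int  n, written = 0;

          while (written < len) {
              n = SSL_write(ssl, buf + written, len - written);

              if (n > 0) {
                  written += n;
                  continue;
              }

              if (SSL_get_error(ssl, n) == SSL_ERROR_WANT_WRITE) {
                  /* wait for the socket to become writable (poll/epoll in a
                   * real program), then retry with the same arguments */
                  continue;
              }

              return -1;
          }

          return written;
      }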
* Upstream: disabled upgrading in subrequests. (Roman Arutyunyan, 2017-10-11)

  Upgrading an upstream connection is usually followed by reading from the client, which a subrequest is not allowed to do. Moreover, accessing the header_in request field while processing an upgraded connection ends up with a null pointer dereference since the header_in buffer is only created for the main request.
* Upstream: fixed $upstream_status when upstream returns 503/504. (Ruslan Ermilov, 2017-10-11)

  If proxy_next_upstream includes http_503/http_504, and the upstream returns 503/504, $upstream_status converted this to 502 for all values except the last one.
* Upstream: fixed error handling of stale and revalidated cache send. (Sergey Kandaurov, 2017-10-10)

  The NGX_DONE value returned from ngx_http_upstream_cache_send() indicates that the upstream was already finalized in ngx_http_upstream_process_headers(). It was treated as a generic error, which resulted in duplicate finalization.

  Handled NGX_HTTP_UPSTREAM_INVALID_HEADER from ngx_http_upstream_cache_send(). Previously, it could end up being passed to ngx_http_upstream_finalize_request(), and since it's below NGX_HTTP_SPECIAL_RESPONSE, a client connection could get stuck.
* Upstream: even better handling of invalid headers in cache files. (Maxim Dounin, 2017-10-09)

  When parsing of headers in a cache file fails, already parsed headers need to be cleared, and protocol state needs to be reinitialized. To do so, u->request_sent is now set to ensure ngx_http_upstream_reinit() will be called.

  This change complements improvements in 46ddff109e72.
* Cache: fixed caching of intercepted errors (ticket #1382). (Maxim Dounin, 2017-10-03)

  When caching intercepted errors, the previous behaviour was to use the proxy_cache_valid times specified, regardless of various cache control headers present in the response. Fix is to check u->cacheable and use u->cache->valid_sec as set by various cache control response headers, similar to how we do this in the normal caching code path.
* Upstream: better handling of invalid headers in cache files. (Maxim Dounin, 2017-10-02)

  If a cache file is truncated, it is possible that u->process_header() will return NGX_AGAIN. Added appropriate handling of this case by changing the error to NGX_HTTP_UPSTREAM_INVALID_HEADER.

  Also, added appropriate logging of this and NGX_HTTP_UPSTREAM_INVALID_HEADER cases at the "crit" level. Note that this will result in duplicate logging in the case of NGX_HTTP_UPSTREAM_INVALID_HEADER. While this would be better avoided, it is considered to be an overkill to implement cache-specific error logging in u->process_header().

  Additionally, u->buffer.start is now reset to be able to receive a new response, and u->cache_status is set to MISS to provide the value in the $upstream_cache_status variable, much like it happens on other cache file errors detected by ngx_http_file_cache_read(), instead of HIT, which is believed to be misleading.
* Upstream: unconditional parsing of last_modified_time. (Maxim Dounin, 2017-08-23)

  This fixes at least the following cases, where no last_modified_time (assuming caching is not enabled) resulted in incorrect behaviour:

  - slice filter and If-Range requests (ticket #1357);
  - If-Range requests with proxy_force_ranges;
  - expires modified.
* Variables: macros for null variables. (Ruslan Ermilov, 2017-08-01)

  No functional changes.
* Upstream: keep request body file from removal if requested. (Roman Arutyunyan, 2017-07-19)

  The new request flag "preserve_body" indicates that the request body file should not be removed by the upstream module because it may be used later by a subrequest. The flag is set by the SSI (ticket #585), addition and slice modules.

  Additionally, it is also set by the upstream module when a background cache update subrequest is started, to prevent removal of the request body file after an internal redirect. Only the main request is now allowed to remove the file.
* Parenthesized ASCII-related calculations. (Valentin Bartenev, 2017-07-17)

  This also fixes potential undefined behaviour in the range and slice filter modules, caused by local overflows of signed integers in expressions.
* Upstream: introduced ngx_http_upstream_ssl_handshake_handler(). (Maxim Dounin, 2017-06-22)

  This change reworks 13a5f4765887 to only run posted requests once, with nothing on stack. Running posted requests with other request functions on stack may result in use-after-free in case of errors, similar to the one reported in #788.

  To only run posted requests once, a separate function was introduced to be used as the ssl handshake handler in c->ssl->handler, ngx_http_upstream_ssl_handshake_handler(). The ngx_http_run_posted_requests() function is only called in this function, and not in ngx_http_upstream_ssl_handshake(), which may be called directly on stack.

  Additionally, ngx_http_upstream_ssl_handshake_handler() now does appropriate debug logging of the current subrequest, similar to what is done in other event handlers.
* Upstream: fixed running posted requests (ticket #788). (Roman Arutyunyan, 2017-06-14)

  Previously, the upstream resolve handler always called ngx_http_run_posted_requests() to run posted requests after processing the resolver response. However, if the handler was called directly from the ngx_resolve_name() function (for example, if the resolver response was cached), running posted requests from the handler could lead to the following errors:

  - If the request was scheduled for termination, it could actually be terminated in the resolve handler. Upper stack frames could reference the freed request object in this case.

  - If a significant number of requests were posted, and for each of them the resolve handler was called directly from the ngx_resolve_name() function, posted requests could be run recursively and lead to stack overflow.

  Now ngx_http_run_posted_requests() is only called from asynchronously invoked resolve handlers.
* Upstream: style. (Piotr Sikora, 2017-05-31)

  Signed-off-by: Piotr Sikora <piotrsikora@google.com>
* Introduced ngx_tcp_nodelay(). (Ruslan Ermilov, 2017-05-26)
* Background subrequests for cache updates. (Roman Arutyunyan, 2017-05-25)

  Previously, a cache background update might not work as expected, making the client wait for it to complete before receiving the final part of a stale response. This could happen if the response could not be sent to the client socket in one filter chain call.

  Now the background cache update is done in a background subrequest. This type of subrequest does not block any other subrequests or the main request.
* Cleaned up r->headers_out.headers allocation error handling. (Sergey Kandaurov, 2017-04-20)

  If initialization of a header failed for some reason after ngx_list_push(), leaving the header as is can result in uninitialized memory access by the header filter or the log module. The fix is to clear partially initialized headers in case of errors.

  For the Cache-Control header, the fix is to postpone pushing r->headers_out.cache_control until its value is completed.
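  A hedged, nginx-style sketch of the idiom (a fragment, not the actual patch; build_value() and the header name are invented for illustration):

      ngx_table_elt_t  *h;

      h = ngx_list_push(&r->headers_out.headers);
      if (h == NULL) {
          return NGX_ERROR;
      }

      h->hash = 1;
      ngx_str_set(&h->key, "X-Example");

      if (build_value(r, &h->value) != NGX_OK) {
          /* mark the half-initialized header as deleted so the header
           * filter and the log module skip it */
          h->hash = 0;
          return NGX_ERROR;
      }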
* Upstream: allow recovery from "429 Too Many Requests" response. (Piotr Sikora, 2017-03-24)

  This change adds the "http_429" parameter to "proxy_next_upstream" for retrying rate-limited requests, and to "proxy_cache_use_stale" for serving stale cached responses after being rate-limited.

  Signed-off-by: Piotr Sikora <piotrsikora@google.com>
* Moved handling of wev->delayed to the connection event handler. (Maxim Dounin, 2017-04-02)

  With post_action or subrequests, it is possible that the timer set for wev->delayed will expire while the active subrequest write event handler is not ready to handle this. This results in request hangs as observed with limit_rate / sendfile_max_chunk and post_action (ticket #776) or subrequests (ticket #1228).

  Moving the handling to the connection event handler fixes the hangs observed, and also slightly simplifies the code.
* Threads: fixed request hang with aio_write and subrequests. (Maxim Dounin, 2017-03-28)

  If the subrequest is already finalized, the handler set with aio_write may still be used by sendfile in threads when using range requests (see also e4c1f5b32868, and the original note in 9fd738b85fad). Calling an already finalized subrequest's r->write_event_handler in practice results in a request hang in some cases.

  Fix is to trigger the connection event handler if the subrequest was already finalized.
* Added missing "static" specifiers found by gcc -Wtraditional.Ruslan Ermilov2017-03-06
|
* Added missing static specifiers. (Eran Kornblau, 2017-03-02)
* Upstream: read handler cleared on upstream finalization. (Maxim Dounin, 2017-02-10)

  With "proxy_ignore_client_abort off" (the default), the upstream module changes r->read_event_handler to ngx_http_upstream_rd_check_broken_connection(). If the handler is not cleared during upstream finalization, it can be triggered later, causing unexpected effects, if, for example, a request was redirected to a different location using error_page or X-Accel-Redirect.

  In particular, it makes "proxy_ignore_client_abort on" non-working after a redirection in a configuration like this:

      location = / {
          error_page 502 = /error;
          proxy_pass http://127.0.0.1:8082;
      }

      location /error {
          proxy_pass http://127.0.0.1:8083;
          proxy_ignore_client_abort on;
      }

  It is also known to cause segmentation faults when aio is used, see http://mailman.nginx.org/pipermail/nginx-ru/2015-August/056570.html.

  Fix is to explicitly set r->read_event_handler to ngx_http_block_reading() during upstream finalization, similar to how it is done in the request body reading code and in the limit_req module.