path: root/src
...
* OCSP: fixed certificate reference leak. (Sergey Kandaurov, 2020-07-23)

* Xslt: disabled ranges. (Roman Arutyunyan, 2020-07-22)

  Previously, the document generated by the xslt filter was always fully sent
  to the client even if a range was requested and the response status was 206
  with an appropriate Content-Range. The xslt module is unable to serve a
  range because it suspends the header filter chain: by the moment the full
  response xml is buffered by the xslt filter, the range header filter has not
  been called yet, while the range body filter has already been called and did
  nothing.

  The fix is to disable ranges by resetting the r->allow_ranges flag, much
  like the image filter, which employs a similar technique.
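  A minimal sketch of the approach, assuming the usual nginx filter-module
  boilerplate (the function name and the saved next-filter pointer are
  illustrative; only r->allow_ranges is taken from the change above). A filter
  that buffers the whole response can opt out of range processing by clearing
  the flag before headers are sent:

      #include <ngx_config.h>
      #include <ngx_core.h>
      #include <ngx_http.h>

      static ngx_http_output_header_filter_pt  ngx_http_next_header_filter;

      static ngx_int_t
      ngx_http_example_header_filter(ngx_http_request_t *r)
      {
          /* the whole response is buffered before the range body filter
           * can see it, so announce that ranges are not supported */
          r->allow_ranges = 0;

          return ngx_http_next_header_filter(r);
      }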
* Core: close PID file when writing fails. (Ruslan Ermilov, 2020-07-21)

  Reported by Jinhua Tan.

* Slice filter: clear original Accept-Ranges. (Roman Arutyunyan, 2020-07-09)

  The slice filter allows ranges for the response by setting the
  r->allow_ranges flag, which enables the range filter. If the range was not
  requested, the range filter adds an Accept-Ranges header to the response to
  signal the support for ranges.

  Previously, if an Accept-Ranges header was already present in the first
  slice response, the client received two copies of this header. Now, the
  slice filter removes the Accept-Ranges header from the response prior to
  setting the r->allow_ranges flag.
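  A hedged sketch of that removal step, assuming the usual nginx way of
  hiding an already-parsed response header (zeroing its hash and clearing the
  pointer); the surrounding function is illustrative:

      #include <ngx_config.h>
      #include <ngx_core.h>
      #include <ngx_http.h>

      static ngx_int_t
      ngx_http_example_drop_accept_ranges(ngx_http_request_t *r)
      {
          ngx_table_elt_t  *h;

          h = r->headers_out.accept_ranges;

          if (h) {
              /* hide the header received in the first slice response ... */
              h->hash = 0;
              r->headers_out.accept_ranges = NULL;
          }

          /* ... and let the range filter add its own one if needed */
          r->allow_ranges = 1;

          return NGX_OK;
      }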
* Version bump. (Roman Arutyunyan, 2020-07-09)

* gRPC: generate error when response size is wrong. (Maxim Dounin, 2020-07-06)

  As long as the "Content-Length" header is given, we now make sure it exactly
  matches the size of the response. If it doesn't, the response is considered
  malformed and must not be forwarded
  (https://tools.ietf.org/html/rfc7540#section-8.1.2.6). While it is not
  really possible to "not forward" the response which is already being
  forwarded, we generate an error instead, which is the closest equivalent.

  Previous behaviour was to pass everything to the client, but this seems to
  be suboptimal and causes issues (ticket #1695). Also this directly
  contradicts HTTP/2 specification requirements.

  Note that the new behaviour for the gRPC proxy is more strict than that
  applied in other variants of proxying. This is intentional, as HTTP/2
  specification requires us to do so, while in other types of proxying
  malformed responses from backends are well known and historically tolerated.
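  The check boils down to comparing the announced length against what has
  actually arrived. A standalone sketch of that accounting (the struct and
  function names are made up for illustration, not taken from the nginx
  sources):

      #include <stddef.h>
      #include <sys/types.h>

      typedef struct {
          off_t  announced;   /* value of the Content-Length header */
          off_t  received;    /* DATA frame payload bytes seen so far */
      } body_state_t;

      /* returns 0 while the body is consistent, -1 once it is malformed */
      static int
      body_check(body_state_t *s, size_t chunk, int end_stream)
      {
          s->received += (off_t) chunk;

          if (s->received > s->announced) {
              return -1;          /* longer than announced: error out */
          }

          if (end_stream && s->received != s->announced) {
              return -1;          /* shorter than announced: error out */
          }

          return 0;
      }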
* FastCGI: protection from responses with wrong length. (Maxim Dounin, 2020-07-06)

  Previous behaviour was to pass everything to the client, but this seems to
  be suboptimal and causes issues (ticket #1695). Fix is to drop extra data
  instead, as it naturally happens in most clients.

  Additionally, we now also issue a warning if the response is too short, and
  make sure the fact it is truncated is propagated to the client. The
  u->error flag is introduced to make it possible to propagate the error to
  the client in case of unbuffered proxying.

  For responses to HEAD requests there is an exception: we do allow both
  responses without body and responses with body matching the Content-Length
  header.

* Upstream: drop extra data sent by upstream. (Maxim Dounin, 2020-07-06)

  Previous behaviour was to pass everything to the client, but this seems to
  be suboptimal and causes issues (ticket #1695). Fix is to drop extra data
  instead, as it naturally happens in most clients.

  This change covers generic buffered and unbuffered filters as used in the
  scgi and uwsgi modules. Appropriate input filter init handlers are provided
  by the scgi and uwsgi modules to set corresponding lengths.

  Note that for responses to HEAD requests there is an exception: we do allow
  any response length. This is because responses to HEAD requests might be
  actual full responses, and it is up to nginx to remove the response body.
  If caching is enabled, only full responses matching the Content-Length
  header will be cached (see b779728b180c).
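  A standalone sketch of the "drop extra data" idea: once the expected body
  length is known, anything beyond it is silently discarded rather than passed
  on. All names here are illustrative, not the actual nginx input filter API:

      #include <stddef.h>
      #include <sys/types.h>

      typedef struct {
          off_t  length;      /* body bytes still expected, -1 = unknown */
      } input_state_t;

      /* trims a received chunk so that no more than the expected length is
       * forwarded; returns the number of bytes to pass downstream */
      static size_t
      input_filter_trim(input_state_t *s, size_t bytes)
      {
          if (s->length == -1) {
              return bytes;               /* length not known, pass as is */
          }

          if ((off_t) bytes > s->length) {
              bytes = (size_t) s->length; /* drop extra data silently */
          }

          s->length -= (off_t) bytes;

          return bytes;
      }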
* Proxy: style. (Maxim Dounin, 2020-07-06)

* Proxy: detection of data after final chunk. (Maxim Dounin, 2020-07-06)

  Previously, additional data after the final chunk was either ignored (in
  the same buffer, or during unbuffered proxying) or sent to the client (in
  the next buffer, if it had already been read from the socket). Now
  additional data are properly detected and ignored in all cases.
  Additionally, a warning is now logged and keepalive is disabled in the
  connection.

* Proxy: drop extra data sent by upstream. (Maxim Dounin, 2020-07-06)

  Previous behaviour was to pass everything to the client, but this seems to
  be suboptimal and causes issues (ticket #1695). Fix is to drop extra data
  instead, as it naturally happens in most clients.

* Memcached: protect from too long responses. (Maxim Dounin, 2020-07-06)

  If a memcached response was followed by a correct trailer, and then by the
  NUL character and some extra data, this was accepted by the trailer checking
  code. This in turn resulted in ctx->rest underflow and caused a negative
  size buffer on the next reading from the upstream, followed by the
  "negative size buf in writer" alert. Fix is to always check for too long
  responses, so a correct trailer cannot be followed by extra data.
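  The rule is simply that, after the value bytes, exactly the trailer and
  nothing else may follow. A standalone sketch of such a check (names are
  illustrative; the real module tracks this through ctx->rest):

      #include <stddef.h>
      #include <string.h>

      #define MEMCACHED_TRAILER  "\r\nEND\r\n"

      /* 'p' points at the data that follows the value bytes, 'len' is how
       * much of it has been received so far; returns -1 on a malformed
       * trailer or on any extra data after it, 0 if more input is needed,
       * 1 if the response ended correctly */
      static int
      trailer_check(const unsigned char *p, size_t len)
      {
          size_t  tlen = sizeof(MEMCACHED_TRAILER) - 1;

          if (len < tlen) {
              return memcmp(p, MEMCACHED_TRAILER, len) == 0 ? 0 : -1;
          }

          if (memcmp(p, MEMCACHED_TRAILER, tlen) != 0) {
              return -1;
          }

          return (len == tlen) ? 1 : -1;  /* anything past the trailer is an error */
      }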
* HTTP/2: lingering close after GOAWAY. (Ruslan Ermilov, 2020-07-03)

  After sending the GOAWAY frame, a connection is now closed using the
  lingering close mechanism. This allows for the reliable delivery of the
  GOAWAY frames, while also fixing connection resets observed when
  http2_max_requests is reached (ticket #1250), or with graceful shutdown
  (ticket #1544), when some additional data from the client is received on a
  fully closed connection.

  For HTTP/2, the settings lingering_close, lingering_timeout, and
  lingering_time are taken from the "server" level.
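  For reference, lingering close in its generic form means: stop sending, but
  keep reading and discarding client data for a bounded time before actually
  closing, so that the peer's TCP stack does not reset the connection and
  drop data (here, the GOAWAY frame) still in flight. A bare-bones POSIX
  sketch of the mechanism, independent of the nginx event machinery and
  deliberately simplified (a real implementation would shrink the remaining
  timeout on each iteration):

      #include <poll.h>
      #include <sys/socket.h>
      #include <unistd.h>

      /* read and discard whatever the client still sends for up to
       * timeout_ms per wait, then close the socket */
      static void
      lingering_close(int fd, int timeout_ms)
      {
          char           buf[4096];
          ssize_t        n;
          struct pollfd  pfd = { .fd = fd, .events = POLLIN };

          shutdown(fd, SHUT_WR);                 /* no more data from us */

          while (poll(&pfd, 1, timeout_ms) > 0) {
              n = recv(fd, buf, sizeof(buf), 0);
              if (n <= 0) {
                  break;                         /* EOF or error */
              }
              /* data is discarded */
          }

          close(fd);
      }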
* SSL: fixed unexpected certificate requests (ticket #2008). (Maxim Dounin, 2020-06-29)

  Using SSL_CTX_set_verify(SSL_VERIFY_PEER) implies that OpenSSL will send a
  certificate request during an SSL handshake, leading to unexpected
  certificate requests from browsers as long as there are any client
  certificates installed. Given that ngx_ssl_trusted_certificate() is called
  unconditionally by the ngx_http_ssl_module, this affected all HTTPS servers.

  Broken by 699f6e55bbb4 (not released yet). Fix is to set verify callback in
  the ngx_ssl_trusted_certificate() function without changing the verify mode.
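  A hedged sketch of the "set the callback without changing the mode" idea in
  plain OpenSSL calls; SSL_CTX_get_verify_mode() and SSL_CTX_set_verify() are
  real OpenSSL functions, the callback body and function names are
  illustrative:

      #include <openssl/ssl.h>
      #include <openssl/x509.h>

      static int
      example_verify_callback(int ok, X509_STORE_CTX *store)
      {
          /* log or inspect the certificate chain here; returning ok keeps
           * the verification result unchanged */
          (void) store;
          return ok;
      }

      static void
      set_verify_callback_only(SSL_CTX *ctx)
      {
          /* keep whatever mode is already configured (e.g. SSL_VERIFY_NONE),
           * so no certificate request is added to the handshake */
          SSL_CTX_set_verify(ctx, SSL_CTX_get_verify_mode(ctx),
                             example_verify_callback);
      }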
* Fixed potential leak of temp pool. (Eran Kornblau, 2020-06-15)

  In case ngx_hash_add_key() fails, we need to goto failed instead of
  returning, so that temp_pool will be destroyed.
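  A minimal sketch of the goto-cleanup pattern the fix relies on; the function
  name is illustrative, only ngx_hash_add_key() and ngx_destroy_pool() are
  taken from the change above:

      #include <ngx_config.h>
      #include <ngx_core.h>

      /* illustrative error path: every failure after the temporary pool is
       * created must go through "failed" so the pool is always destroyed */
      static ngx_int_t
      ngx_example_add_key(ngx_hash_keys_arrays_t *ha, ngx_str_t *key,
          void *value)
      {
          ngx_int_t  rc;

          rc = ngx_hash_add_key(ha, key, value, NGX_HASH_READONLY_KEY);

          if (rc != NGX_OK) {
              goto failed;               /* not "return rc;" */
          }

          return NGX_OK;

      failed:

          ngx_destroy_pool(ha->temp_pool);

          return NGX_ERROR;
      }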
* Cache: introduced min_free cache clearing. (Maxim Dounin, 2020-06-22)

  Clearing cache based on free space left on a file system is expected to
  allow better disk utilization in some cases, notably when disk space might
  also be used for something other than the nginx cache (including nginx's
  own temporary files) and while loading cache (when cache size might be
  inaccurate for a while, effectively disabling max_size cache clearing).

  Based on a patch by Adam Bambuch.

* Too large st_blocks values are now ignored (ticket #157). (Maxim Dounin, 2020-06-22)

  With XFS, using the "allocsize=64m" mount option results in large
  preallocation being reported in the st_blocks as returned by fstat() till
  the file is closed. This in turn results in incorrect cache size
  calculations and wrong clearing based on max_size.

  To avoid too aggressive cache clearing on such volumes, st_blocks values
  which result in sizes larger than st_size plus eight blocks (an arbitrary
  limit) are no longer trusted, and we use st_size instead.

  The ngx_de_fs_size() counterpart is intentionally not modified, as it is
  used on closed files and hence not affected by this problem.
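  A standalone sketch of the heuristic, assuming the conventional 512-byte
  st_blocks unit; the eight-block slack mirrors the arbitrary limit mentioned
  above, and the function name is illustrative:

      #include <sys/types.h>
      #include <sys/stat.h>

      /* size of a file as used for cache accounting: prefer the on-disk
       * size derived from st_blocks, but fall back to st_size when
       * preallocation makes st_blocks implausibly large */
      static off_t
      cache_file_size(const struct stat *st)
      {
          off_t  size;

          size = (off_t) st->st_blocks * 512;

          if (size > st->st_size + 8 * 512) {
              size = st->st_size;        /* do not trust st_blocks */
          }

          return size;
      }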
* Large block sizes on Linux are now ignored (ticket #1168). (Maxim Dounin, 2020-06-22)

  NFS on Linux is known to report wsize as a block size (in both f_bsize and
  f_frsize, both in statfs() and statvfs()). On the other hand, typical file
  system block sizes on Linux (ext2/ext3/ext4, XFS) are limited to pagesize.
  (With FAT, block sizes can be at least up to 512k in extreme cases, but this
  doesn't really matter, see below.) To avoid too aggressive cache clearing on
  NFS volumes on Linux, block sizes larger than pagesize are now ignored.

  Note that it is safe to ignore large block sizes. Since 3899:e7cd13b7f759
  (1.0.1) cache size is calculated based on fstat() st_blocks, and rounding
  to file system block size is preserved mostly for Windows.

  Note well that on other OSes valid block sizes seen are at least up to
  65536. In particular, UFS on FreeBSD is known to work well with block and
  fragment sizes set to 65536.
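  A hedged sketch of the corresponding check using the portable statvfs()
  interface; the Linux-only pagesize cap described above is what the #ifdef
  models, and the function name and 512-byte fallback are illustrative:

      #include <sys/statvfs.h>
      #include <unistd.h>

      /* block size to use for rounding cached file sizes; on Linux,
       * anything larger than pagesize is treated as bogus (NFS wsize) */
      static unsigned long
      cache_block_size(const char *path)
      {
          struct statvfs  fs;
          unsigned long   bsize;

          if (statvfs(path, &fs) == -1) {
              return 512;                          /* safe default */
          }

          bsize = fs.f_frsize ? fs.f_frsize : fs.f_bsize;

      #if defined(__linux__)
          if (bsize > (unsigned long) sysconf(_SC_PAGESIZE)) {
              bsize = 512;                         /* ignore bogus size */
          }
      #endif

          return bsize;
      }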
* OCSP: fixed use-after-free on error. (Roman Arutyunyan, 2020-06-15)

  When validating the second and further certificates, the ssl callback could
  be called twice to report the error. After the first call the client
  connection is terminated and its memory is released. Prior to the second
  call, and in it, the released connection memory is accessed.

  Errors triggering this behavior:
  - failure to create the request
  - failure to start resolving OCSP responder name
  - failure to start connecting to the OCSP responder

  The fix is to rearrange the code to eliminate the second call.

* Correctly flush request body to uwsgi with SSL. (Quantum, 2020-06-15)

  The flush flag was not set when forwarding the request body to the uwsgi
  server. When using uwsgi_pass suwsgi://..., this causes the uwsgi server to
  wait indefinitely for the request body and eventually time out due to SSL
  buffering.

  This is essentially the same change as 4009:3183165283cc, which was made to
  ngx_http_proxy_module.c.

  This will fix the uwsgi bug https://github.com/unbit/uwsgi/issues/1490.
* Stream: fixed processing of zero length UDP packets (ticket #1982). (Vladimir Homutov, 2020-06-08)

* SSL: added verify callback to ngx_ssl_trusted_certificate(). (Maxim Dounin, 2020-06-03)

  This ensures that certificate verification is properly logged to debug log
  during upstream server certificate verification. This should help with
  debugging various certificate issues.

* Fixed SIGQUIT not removing listening UNIX sockets (closes #753). (Ruslan Ermilov, 2020-06-01)

  Listening UNIX sockets were not removed on graceful shutdown, preventing
  the next runs. The fix is to replace the custom socket closing code in
  ngx_master_process_cycle() by the ngx_close_listening_sockets() call.

* Fixed removing of listening UNIX sockets when "changing binary". (Ruslan Ermilov, 2020-06-01)

  When changing binary, sending a SIGTERM to the new binary's master process
  should not remove inherited UNIX sockets unless the old binary's master
  process has exited.
* Version bump. (Maxim Dounin, 2020-05-26)

* HTTP/2: invalid connection preface logging (ticket #1981). (Maxim Dounin, 2020-05-25)

  Previously, invalid connection preface errors were only logged at debug
  level, providing no visible feedback, in particular, when a plain text
  HTTP/2 listening socket is erroneously used for HTTP/1.x connections. Now
  these are explicitly logged at the info level, much like other
  client-related errors.

* Fixed format specifiers. (Sergey Kandaurov, 2020-05-23)
* OCSP: certificate status cache. (Roman Arutyunyan, 2020-05-22)

  When enabled, certificate status is stored in the cache and is used to
  validate the certificate in future requests. The new directive
  ssl_ocsp_cache is added to configure the cache.

* SSL: client certificate validation with OCSP (ticket #1534). (Roman Arutyunyan, 2020-05-22)

  OCSP validation for client certificates is enabled by the "ssl_ocsp"
  directive. The OCSP responder can be optionally specified by
  "ssl_ocsp_responder".

  When a session is reused, the peer chain is not available for validation.
  If the verified chain contains certificates from the peer chain not
  available at the server, validation will fail.

* OCSP stapling: iterate over all responder addresses. (Roman Arutyunyan, 2020-05-22)

  Previously only the first responder address was used per each stapling
  update. Now, in case of a network or parsing error, the next address is
  used. This also fixes the issue with unsupported responder address
  families (ticket #1330).
* OCSP stapling: keep extra chain in the staple object. (Roman Arutyunyan, 2020-05-17)

* OCSP stapling: moved response verification to a separate function. (Roman Arutyunyan, 2020-05-06)

* Upstream: jump out of loop after matching the status code. (Jinhua Tan, 2020-05-13)

* Variables: fixed buffer over-read when evaluating "$arg_". (Sergey Kandaurov, 2020-05-08)
* gRPC: WINDOW_UPDATE after END_STREAM handling (ticket #1797). (Ruslan Ermilov, 2020-04-23)

  As per https://tools.ietf.org/html/rfc7540#section-6.9, WINDOW_UPDATE
  received after a frame with the END_STREAM flag should be handled and not
  treated as an error.

* gRPC: RST_STREAM(NO_ERROR) handling (ticket #1792). (Ruslan Ermilov, 2020-04-23)

  As per https://tools.ietf.org/html/rfc7540#section-8.1,

  : A server can send a complete response prior to the client
  : sending an entire request if the response does not depend on
  : any portion of the request that has not been sent and
  : received. When this is true, a server MAY request that the
  : client abort transmission of a request without error by
  : sending a RST_STREAM with an error code of NO_ERROR after
  : sending a complete response (i.e., a frame with the
  : END_STREAM flag). Clients MUST NOT discard responses as a
  : result of receiving such a RST_STREAM, though clients can
  : always discard responses at their discretion for other
  : reasons.

  Previously, RST_STREAM(NO_ERROR) received from upstream after a frame with
  the END_STREAM flag was incorrectly treated as an error. Now, a single
  RST_STREAM(NO_ERROR) is properly handled.

  This fixes problems observed with modern grpc-c [1], as well as with the
  Go gRPC module.

  [1] https://github.com/grpc/grpc/pull/1661

* Version bump. (Ruslan Ermilov, 2020-04-23)
* The new auth_delay directive for delaying unauthorized requests. (Ruslan Ermilov, 2020-04-08)

  The request processing is delayed by a timer. Since nginx updates internal
  time once at the start of each event loop iteration, this normally ensures
  constant time delay, adding a mitigation from time-based attacks.

  A notable exception to this is the case when there are no additional events
  before the timer expires. To ensure constant-time processing in this case
  as well, we trigger an additional event loop iteration by posting a dummy
  event for the next event loop iteration.

* Auth basic: explicitly zero out password buffer. (Ruslan Ermilov, 2020-03-13)

* Version bump. (Ruslan Ermilov, 2020-03-16)
* Simplified subrequest finalization. (Roman Arutyunyan, 2020-02-28)

  Now it looks similar to what it was before background subrequests were
  introduced in 9552758a786e.

* Fixed premature background subrequest finalization. (Dmitry Volyntsev, 2020-03-02)

  When "aio" or "aio threads" is used while processing the response body of
  an in-memory background subrequest, the subrequest could be finalized with
  an aio operation still in progress. Upon aio completion either the parent
  request is woken or the old r->write_event_handler is called again. The
  latter may result in request errors. In either case the post_subrequest
  handler is never called with the full response body, which is typically
  expected when using in-memory subrequests.

  Currently in nginx background subrequests are created by the upstream
  module and the mirror module. The issue does not manifest itself with
  these subrequests because they are header-only. But it can manifest itself
  with third-party modules which create in-memory background subrequests.
* Added default overwrite in error_page 494. (Maxim Dounin, 2020-02-28)

  We used to have default error_page overwrite for 495, 496, and 497, so a
  configuration like

      error_page 495 /error;

  will result in error 400, much like without any error_page configured.

  The 494 status code was introduced later (in 3848:de59ad6bf557, nginx
  0.9.4), and relevant changes to ngx_http_core_error_page() were missed,
  resulting in inconsistent behaviour of "error_page 494" - with error_page
  configured it results in 494 being returned instead of 400.

  Reported by Frank Liu,
  http://mailman.nginx.org/pipermail/nginx/2020-February/058957.html.
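  A hedged sketch of the kind of mapping involved; the function name is
  illustrative, and the constants stand for the nginx-specific 494-497 status
  codes, which are reported to the client as 400 unless the error_page
  configuration supplies an explicit "=code" overwrite:

      /* default response code sent to the client for the nginx-specific
       * 49x request errors when no explicit overwrite is configured */
      static int
      default_error_overwrite(int status)
      {
          switch (status) {
          case 494:   /* Request Header Too Large */
          case 495:   /* SSL Certificate Error */
          case 496:   /* SSL Certificate Required */
          case 497:   /* HTTP Request Sent to HTTPS Port */
              return 400;

          default:
              return status;
          }
      }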
* Mp4: fixed possible chunk offset overflow. (Roman Arutyunyan, 2020-02-26)

  In the "co64" atom, chunk start offset is a 64-bit unsigned integer. When
  trimming the "mdat" atom, chunk offsets are casted to off_t values which
  are typically 64-bit signed integers. A specially crafted mp4 file with
  huge chunk offsets may lead to off_t overflow and result in negative trim
  boundaries.

  The consequences of the overflow are:
  - Incorrect Content-Length header value in the response.
  - Negative left boundary of the response file buffer holding the trimmed
    "mdat". This leads to pread()/sendfile() errors followed by closing the
    client connection.

  On rare systems where off_t is a 32-bit integer, this scenario is also
  feasible with the "stco" atom.

  The fix is to add checks which make sure data chunks referenced by each
  track are within the mp4 file boundaries. Additionally a few more checks
  are added to ensure mp4 file consistency and log errors.
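  A standalone sketch of the bounds check described above, assuming a 64-bit
  off_t: before a 64-bit unsigned chunk offset is cast to off_t, make sure it
  both fits the signed type and stays within the file (names are
  illustrative):

      #include <stdint.h>
      #include <sys/types.h>

      /* returns 0 if the chunk [offset, offset + size) is usable,
       * -1 if casting it to off_t would overflow or it leaves the file */
      static int
      chunk_offset_check(uint64_t offset, uint64_t size, off_t file_size)
      {
          if (offset > (uint64_t) INT64_MAX
              || size > (uint64_t) INT64_MAX - offset)
          {
              return -1;                 /* would overflow off_t */
          }

          if ((off_t) (offset + size) > file_size) {
              return -1;                 /* chunk outside the mp4 file */
          }

          return 0;
      }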
* Disabled connection reuse while in SSL handshake. (Sergey Kandaurov, 2020-02-27)

  During an SSL handshake, the connection could be reused in the OCSP
  stapling callback, if configured, which subsequently leads to a
  segmentation fault.

* Disabled duplicate "Host" headers (ticket #1724). (Maxim Dounin, 2020-02-20)

  Duplicate "Host" headers were allowed in nginx 0.7.0 (revision b9de93d804ea)
  as a workaround for some broken Motorola phones which used to generate
  requests with two "Host" headers [1]. It is believed that this workaround
  is no longer relevant.

  [1] http://mailman.nginx.org/pipermail/nginx-ru/2008-May/017845.html

* Removed "Transfer-Encoding: identity" support. (Maxim Dounin, 2020-02-20)

  The "identity" transfer coding has been removed in RFC 7230. It is believed
  that it is not used in real life, and at the same time it provides a
  potential attack vector.

* Disabled multiple Transfer-Encoding headers. (Maxim Dounin, 2020-02-20)

  We anyway do not support more than one transfer encoding, so accepting
  requests with multiple Transfer-Encoding headers doesn't make sense.
  Further, we do not handle multiple headers, and ignore anything but the
  first header.

  Reported by Filippo Valsorda.
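  A standalone sketch of the header-parsing rule the last two entries
  converge on: a request with more than one Transfer-Encoding header, or with
  any value other than "chunked", is rejected as malformed. The names are
  illustrative, not the actual nginx parser:

      #include <stddef.h>
      #include <strings.h>

      typedef struct {
          const char  *name;
          const char  *value;
      } header_t;

      /* returns 0 if the Transfer-Encoding headers are acceptable,
       * -1 if the request must be rejected */
      static int
      transfer_encoding_check(const header_t *h, size_t n)
      {
          size_t  i, seen = 0;

          for (i = 0; i < n; i++) {
              if (strcasecmp(h[i].name, "Transfer-Encoding") != 0) {
                  continue;
              }

              if (++seen > 1) {
                  return -1;             /* multiple headers: reject */
              }

              if (strcasecmp(h[i].value, "chunked") != 0) {
                  return -1;             /* "identity" and friends: reject */
              }
          }

          return 0;
      }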
* Made ngx_http_get_forwarded_addr_internal() non-recursive. (Vladimir Homutov, 2020-02-11)

* HTTP/2: fixed socket leak with an incomplete HEADERS frame. (Sergey Kandaurov, 2020-02-05)

  A connection could get stuck without timers if a client has partially sent
  the HEADERS frame such that it was split on the individual header boundary.
  In this case, it cannot be processed without the rest of the HEADERS frame.

  The fix is to call ngx_http_v2_state_headers_save() in this case. Normally,
  it would be called from the ngx_http_v2_state_header_block() handler on the
  next iteration, when there is not enough data to continue processing. This
  isn't the case if recv_buffer became empty and there's no more data to read.