By default we still send requests using HTTP/1.0. This may be changed with
the new proxy_http_version directive.
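For illustration, a minimal sketch of how a module can pick the request line
from such a setting. Only ngx_str_t, CRLF and NGX_HTTP_VERSION_11 come from
nginx itself; the helper name is hypothetical, not the actual proxy module
code:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Hypothetical helper: choose the request-line tail for the
     * configured version; HTTP/1.0 stays the default. */
    static ngx_str_t
    ngx_http_foo_request_version(ngx_uint_t configured_version)
    {
        static ngx_str_t  v10 = ngx_string(" HTTP/1.0" CRLF);
        static ngx_str_t  v11 = ngx_string(" HTTP/1.1" CRLF);

        return (configured_version == NGX_HTTP_VERSION_11) ? v11 : v10;
    }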
Once we know the protocol version, set u->headers_in.connection_close to
indicate the implicitly assumed connection close with HTTP before 1.1.
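The idea, sketched as a standalone function under the usual nginx module
includes (the function name and the parsed-version argument are illustrative,
not the patch itself):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Before HTTP/1.1 there is no keep-alive default, so assume the
     * upstream closes the connection unless headers say otherwise. */
    static void
    ngx_http_foo_check_close(ngx_http_upstream_t *u, ngx_uint_t parsed_version)
    {
        if (parsed_version < NGX_HTTP_VERSION_11) {
            u->headers_in.connection_close = 1;
        }
    }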
By default, follow the old behaviour: the FASTCGI_KEEP_CONN flag isn't set
in the request, and the application is responsible for closing the connection
once the request is done. To keep connections alive, fastcgi_keep_conn must
be activated.
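The flag itself is defined by the FastCGI spec: bit 0 of the flags byte in
the FCGI_BEGIN_REQUEST record body. A standalone sketch of such a record
(constants per the spec; the builder function is hypothetical):

    #include <stdint.h>
    #include <string.h>

    #define FCGI_BEGIN_REQUEST  1
    #define FCGI_RESPONDER      1
    #define FCGI_KEEP_CONN      1  /* bit 0: don't close after the request */

    /* Build an FCGI_BEGIN_REQUEST record: 8-byte header, 8-byte body. */
    static void
    fcgi_begin_request(uint8_t rec[16], uint16_t request_id, int keep_conn)
    {
        memset(rec, 0, 16);
        rec[0] = 1;                   /* protocol version */
        rec[1] = FCGI_BEGIN_REQUEST;  /* record type */
        rec[2] = request_id >> 8;     /* requestIdB1 */
        rec[3] = request_id & 0xff;   /* requestIdB0 */
        rec[5] = 8;                   /* contentLengthB0: body is 8 bytes */
        rec[9] = FCGI_RESPONDER;      /* roleB0 */
        rec[10] = keep_conn ? FCGI_KEEP_CONN : 0;  /* flags */
    }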
This patch introduces the r->upstream->keepalive flag, which is set by
protocol handlers if the connection to the upstream is in a good state and
can be kept alive.
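A sketch of how a protocol handler might set it once the response framing is
fully known; the conditions shown are plausible assumptions, not the exact
ones from the patch:

    /* fragment: in a protocol handler, after response headers are parsed;
     * only a connection whose end-of-message is unambiguous may be reused */
    if (u->headers_in.content_length_n != -1
        && !u->headers_in.connection_close)
    {
        u->keepalive = 1;
    }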
As long as ngx_event_pipe() has read more data from the upstream than
specified in p->length, the data is passed to the input filter even if the
buffer isn't yet full. This allows processing data of known length without
relying on connection close to signal the end of data.

By default p->length is set to -1 in the upstream module, i.e. the end of
data is indicated by connection close. To let per-protocol handlers set it,
the upstream input_filter_init() is now called in buffered mode as well
(previously only in unbuffered mode).
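What such a per-protocol input_filter_init() might look like, assuming the
handler's data pointer is the request (the function name is hypothetical):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* Tell the event pipe how much body to expect, so it can signal
     * end-of-data without waiting for the upstream to close. */
    static ngx_int_t
    ngx_http_foo_input_filter_init(void *data)
    {
        ngx_http_request_t  *r = data;

        r->upstream->pipe->length = r->upstream->headers_in.content_length_n;

        return NGX_OK;
    }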
The previous use of size_t may cause weird effects on 32-bit platforms with
certain big responses transferred in unbuffered mode.

Nuke the "if (size > u->length)" check, as it isn't useful anyway (preread
body data isn't subject to this check) and would now require an additional
check for u->length being positive.
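The pitfall is plain unsigned arithmetic: on 32-bit platforms size_t is a
32-bit unsigned type, so decrementing a length past zero silently wraps,
while a signed 64-bit off_t keeps comparisons meaningful. A standalone
illustration, with uint32_t standing in for a 32-bit size_t:

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        uint32_t  ulen = 0;   /* 32-bit size_t stand-in */
        int64_t   slen = 0;   /* off_t stand-in */

        ulen -= 100;          /* wraps to 4294967196 */
        slen -= 100;          /* stays -100 */

        printf("unsigned: %u, signed: %lld\n",
               (unsigned) ulen, (long long) slen);
        return 0;
    }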
We no longer use r->headers_out.content_length_n as the primary source of
the backend's response length. Instead we parse the response length into
u->headers_in.content_length_n and copy it to r->headers_out.content_length_n
when needed.
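In code terms the copy is the single assignment below (a fragment; the
surrounding handler is assumed):

    /* once the backend's framing has been parsed, publish the length
     * to the client-facing side of the request */
    r->headers_out.content_length_n = u->headers_in.content_length_n;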
This is required to support persistent HTTPS connections, as various SSL
structures are allocated from the connection's pool.
Just doing another connect isn't safe, as peer.get() may expect peer.tries
to be strictly positive (this is the case e.g. with round-robin with multiple
upstream servers). Increment peer.tries to at least avoid a CPU hog in the
round-robin balancer (with the patch, an alert will be seen instead).

This is not enough to fully address the problem though, hence the TODO: we
should be able to inform the balancer that the error wasn't considered fatal,
so it may make sense to retry the same peer.
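The workaround amounts to this fragment (placement and surrounding checks
are assumed; tries is a real ngx_peer_connection_t field):

    /* peer.get() may rely on tries being strictly positive, so restore
     * the invariant before asking the balancer for another attempt */
    if (u->peer.tries == 0) {
        u->peer.tries++;
    }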
ngx_chain_update_chains() needs a pool to free chain links used for buffers
with non-matching tags. Providing one helps to reduce memory consumption for
long-lived requests.
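In current nginx the pool is the first argument; a typical call looks like
the fragment below (ctx, out and the module tag are illustrative):

    /* recycle sent chain links: buffers with our tag go back to the free
     * list, links for foreign-tagged buffers are freed via the pool */
    ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out,
                            (ngx_buf_tag_t) &ngx_http_foo_filter_module);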
There were two buffers allocated for each buffer chain sent through the
chunked filter (one buffer for the chunk size, another for the trailing
CRLF; about 120 bytes in total on 32-bit platforms). This resulted in large
memory consumption with long-lived requests sending many buffer chains. The
usual example of a problematic scenario is streaming through a proxy with
proxy_buffering set to off.

The buffer reuse introduced here reduces memory consumption in the above
scenario.

See here for the initial report:
http://mailman.nginx.org/pipermail/nginx/2010-April/019814.html
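The reuse follows the usual free/busy-list idiom: take a chain link recycled
from an earlier chain when one is available, and fall back to pool allocation
otherwise. A sketch under that assumption (the ctx fields are illustrative):

    /* get a buffer for the chunk-size line */
    if (ctx->free) {
        cl = ctx->free;
        ctx->free = cl->next;
        b = cl->buf;

    } else {
        b = ngx_calloc_buf(r->pool);
        if (b == NULL) {
            return NGX_ERROR;
        }

        cl = ngx_alloc_chain_link(r->pool);
        if (cl == NULL) {
            return NGX_ERROR;
        }

        cl->buf = b;
    }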
is still worthwhile.
If the file inode was not changed, cached file information was not updated
on retest. As a result, stale information might be cached forever if file
attributes were changed and/or the file was extended.

This fix also makes the r4077 change of is_directio flag handling obsolete,
since this flag is now updated together with the other file information.
Apple deprecated OpenSSL on Mac OS X Lion in favour of their CommonCrypto
library. This change adds a work-around that allows nginx to still be built
on Lion with OpenSSL.
On file retest, open_file_cache lost is_directio if the file wasn't changed.
This caused unaligned operations under Linux to fail with EINVAL. It wasn't
noticeable with AIO though, as errors weren't properly logged.
The read event should be blocked after reading the body, else undefined
behaviour might occur on additional client activity. This fixes segmentation
faults observed with proxy_ignore_client_abort set.
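Sketched, the fix installs a read handler that deliberately does nothing once
the body has been consumed; the handlers below are hypothetical stand-ins,
not nginx's internal ones:

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    /* further client activity on the connection is simply ignored */
    static void
    ngx_http_foo_blocked_reading(ngx_http_request_t *r)
    {
        /* intentionally empty: the read event stays blocked */
    }

    static void
    ngx_http_foo_body_done(ngx_http_request_t *r)
    {
        r->read_event_handler = ngx_http_foo_blocked_reading;
    }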
Setting read->eof to 0 seems to be just a typo. It appeared in the
nginx-0.0.1-2003-10-28-18:45:41 import (r164), while identical code in
ngx_recv.c introduced in the same import does actually set read->eof to 1.

Failure to set read->eof to 1 makes EOF generally undetectable from the
connection flags. On the other hand, kqueue won't report any further read
events on such a connection, since we use EV_CLEAR. This resulted in read
timeouts if such a connection was cached and used for another request.
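The restored invariant, sketched: a zero-byte read means EOF and must be
recorded on the read event, because with EV_CLEAR kqueue will not report the
condition again (fragment; n is the result of the read syscall):

    if (n == 0) {
        rev->ready = 0;
        rev->eof = 1;   /* with EV_CLEAR, the only durable record of EOF */
    }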
If a connection has unsent alerts, SSL_shutdown() tries to send them even
if SSL_set_shutdown(SSL_RECEIVED_SHUTDOWN|SSL_SENT_SHUTDOWN) was used. This
can be prevented with SSL_set_quiet_shutdown(). SSL_set_shutdown() is
nevertheless required to preserve the session.
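The resulting shutdown sequence, sketched against the OpenSSL API (all three
calls are real OpenSSL functions; the wrapper name is illustrative):

    #include <openssl/ssl.h>

    /* close without sending anything, but keep the session reusable */
    static void
    quiet_ssl_close(SSL *ssl)
    {
        SSL_set_quiet_shutdown(ssl, 1);   /* suppress unsent alerts */
        SSL_set_shutdown(ssl, SSL_RECEIVED_SHUTDOWN | SSL_SENT_SHUTDOWN);
        SSL_shutdown(ssl);                /* preserves the session */
    }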
If the total size of all ranges is greater than the source response size,
then nginx disables ranges and returns just the source response.
"max_ranges 0" disables ranges support at all,
"max_ranges 1" allows the single range, etc.
By default number of ranges is unlimited, to be precise, 2^31-1.
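Enforcement reduces to counting range specs while parsing the Range header
and giving up once the limit is exhausted. A standalone sketch of that
counting (the function is hypothetical, not the range filter's code):

    /* Count comma-separated specs in a "Range: bytes=..." value;
     * returns -1 once the count exceeds max_ranges, in which case the
     * full response should be served instead. */
    static int
    count_ranges(const char *p, int max_ranges)
    {
        int  n = 1;

        if (n > max_ranges) {
            return -1;        /* max_ranges 0: ranges disabled entirely */
        }

        for (/* void */; *p; p++) {
            if (*p == ',' && ++n > max_ranges) {
                return -1;
            }
        }

        return n;
    }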
*) optimization: the start value may be tested against the end value only,
since the end value here cannot be greater than content_length.
was not properly skipped. The bug was introduced in r4057.
| |
then nginx disables ranges and returns just the source response.
This fix should not affect well-behaving applications but will defeat
DoS attempts exploiting malicious byte ranges.
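Sketched, the check totals the span of the requested ranges and compares it
with the source response length (standalone; the types are simplified):

    #include <stdint.h>

    typedef struct {
        int64_t  start;
        int64_t  end;    /* exclusive */
    } range_t;

    /* A well-behaving client never requests more bytes than the
     * response contains; reject (i.e. disable ranges) when it does. */
    static int
    ranges_acceptable(const range_t *r, int n, int64_t content_length)
    {
        int64_t  total = 0;
        int      i;

        for (i = 0; i < n; i++) {
            total += r[i].end - r[i].start;
        }

        return total <= content_length;
    }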