When the "pending" value is zero, the "buf" will be right shifted
by the width of its type, which results in undefined behavior.
Found by Coverity (CID 1352150).
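
As an illustration of the fix pattern (a sketch, not the actual nginx code): shifting an N-bit integer by N bits is undefined behavior in C, so the zero-pending case has to be handled before the shift.

    #include <stdint.h>

    /* Sketch: when "pending" is 0, the shift amount equals the full
     * width of "buf" (32 bits here), which is undefined behavior,
     * so that case must be short-circuited. */
    static uint32_t
    flush_pending(uint32_t buf, unsigned pending)
    {
        if (pending == 0) {
            return 0;                     /* avoid buf >> 32 */
        }

        return buf >> (32 - pending);
    }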
This reduces the size of headers by over 30% on average.
Based on the patch by Vlad Krasnov:
http://mailman.nginx.org/pipermail/nginx-devel/2015-December/007682.html
Based on a patch by Takashi Takizawa.
Changes to NGX_MODULE_V1 and ngx_module_t in 85dea406e18f (1.9.11)
broke all modules written in C++, because ISO C++11 does not allow
conversion from string literal to char *.
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
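
The language rule in a minimal example (illustrative; not the nginx headers themselves):

    /* Valid (if deprecated) C, but ill-formed in ISO C++11 -- which is
     * why string literals in headers shared with C++ modules must be
     * const-qualified or explicitly cast. */
    char        *bad  = "ngx_module";     /* error in ISO C++11 */
    const char  *good = "ngx_module";     /* fine in both C and C++ */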
The auto/module script is extended to understand ngx_module_link=DYNAMIC.
When set, the module is linked as a shared object rather than statically
into the nginx binary. The module can later be loaded using the
"load_module" directive.

A new auto/module parameter, ngx_module_order, allows defining the module
loading order in complex cases. By default the order is set based on
ngx_module_type.

Third-party modules can be compiled dynamically using the
--add-dynamic-module configure option, which presets ngx_module_link to
"DYNAMIC" before calling the module config script.

Win32 support is rudimentary, and only works when using MinGW gcc (which
is able to handle exports/imports automatically).

In collaboration with Ruslan Ermilov.
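
A minimal usage sketch (the module name and path are hypothetical):

    # compile a third-party module as a shared object:
    #   ./configure --add-dynamic-module=/path/to/ngx_http_foo_module

    # load it from the main context of nginx.conf:
    load_module modules/ngx_http_foo_module.so;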
Due to the higher precedence of the unary plus operator over the ternary
operator, the expression didn't work as expected. That might result in an
allocation one byte smaller than needed for the HEADERS frame buffer.
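
The pitfall in general form (illustrative only; not the actual nginx expression): "+" binds tighter than "?:", so parentheses are required around the conditional.

    #include <stddef.h>

    size_t
    header_len(size_t base, int cond)
    {
        size_t  len;

        len = base + cond ? 1 : 2;      /* parsed as (base + cond) ? 1 : 2 */
        len = base + (cond ? 1 : 2);    /* the intended calculation */

        return len;
    }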
Now it includes not only the received body size,
but the size of the headers block as well.
Use the original query name in error and debug messages when
processing PTR responses.
The previous code only parsed the first answer, without checking its
type, and required a compressed RR name.

The new code checks the RR type, supports responses with multiple
answers, and doesn't require the RR name to be compressed.

As a side effect, CNAME is now supported to a limited extent: if a
response includes both CNAME and PTR RRs, as when recursion is enabled
on the server, the PTR RR is handled. Full CNAME support in PTR
responses is not implemented in this change.
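
A simplified sketch of the new scanning logic (the record struct and helper are hypothetical; nginx itself parses the DNS wire format directly):

    #define DNS_TYPE_CNAME   5
    #define DNS_TYPE_PTR    12

    typedef struct {
        unsigned short   type;       /* RR type */
        const char      *rdata;      /* decoded target name */
    } dns_rr_t;

    /* Return the first PTR answer; CNAME RRs are skipped, since full
     * CNAME chasing in PTR responses is not implemented. */
    static const char *
    find_ptr(const dns_rr_t *ans, unsigned ancount)
    {
        unsigned  i;

        for (i = 0; i < ancount; i++) {
            if (ans[i].type == DNS_TYPE_PTR) {
                return ans[i].rdata;
            }
        }

        return NULL;
    }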
Renamed argument in ngx_resolver_process_a() for consistency.
Found by Coverity (CID 1351175).
Resend the DNS query over TCP once the UDP response comes back truncated.
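
For reference, truncation is signaled by the TC bit in the DNS header flags; a minimal check of the wire format (hypothetical helper) could look like:

    #include <stddef.h>

    /* Flags occupy bytes 2-3 of the DNS header; bit 0x02 of the first
     * flags byte is TC ("truncated"). */
    static int
    dns_truncated(const unsigned char *resp, size_t len)
    {
        return len >= 4 && (resp[2] & 0x02);
    }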
Previously, a global server balancer was used to assign the next DNS
server to send a query to. That could lead to a non-uniform distribution
of servers per request: a request could be assigned to the same dead
server several times in a row and wait longer for a valid server, or
even time out without being processed.

Now each query is sent to all servers sequentially, in a circle, until
a response is received or the timeout expires. The initial server for
each request is still globally balanced.
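
The selection scheme, sketched with a hypothetical helper (not the actual nginx code):

    /* The first attempt uses the globally balanced starting server;
     * every retry moves on to the next server in the circle.
     * "nservers" is assumed to be non-zero. */
    static unsigned
    resolver_server(unsigned initial, unsigned attempt, unsigned nservers)
    {
        return (initial + attempt) % nservers;
    }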
They will be used for TCP connections as well.
Previously, the recursion was only limited for cached responses.
When several requests were waiting for a response, then after getting
a CNAME response only the last request's context had the name updated.
Contexts of other requests had the wrong name. This name was used by
ngx_resolve_name_done() to find the node to remove the request context
from. When the name was wrong, the request could not be properly
cancelled: its context was freed but stayed linked to the node's waiting
list. This happened, e.g., when the first request was aborted or timed
out before the resolving completed. When it completed, this triggered
a use-after-free memory access by calling ctx->handler of an already
freed request context. The bug manifests itself by
"could not cancel <name> resolving" alerts in error_log.

When a request received a CNAME response, the request context kept
the pointer to the original node's rn->u.cname. If the original node
expired before the resolving timed out or completed with an error,
this would trigger a use-after-free memory access via ctx->name in
ctx->handler().

The fix is to keep ctx->name unmodified. The name from the context
is no longer used by ngx_resolve_name_done(). Instead, we now keep
the pointer to the resolver node to which this request is linked.
Keeping the original name intact also improves logging.
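
The idea behind the fix, as a simplified sketch (hypothetical types; not nginx's actual structures): the context keeps a direct pointer to its resolver node, so cancellation unlinks it without looking the node up by a possibly rewritten name.

    typedef struct ctx_s   ctx_t;
    typedef struct node_s  node_t;

    struct ctx_s  { ctx_t *next; node_t *node; };
    struct node_s { ctx_t *waiting; };

    /* Unlink the request context from its node's waiting list. */
    static void
    cancel_resolve(ctx_t *ctx)
    {
        ctx_t  **p;

        for (p = &ctx->node->waiting; *p; p = &(*p)->next) {
            if (*p == ctx) {
                *p = ctx->next;
                break;
            }
        }
    }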
No functional changes.
This is needed by the following change.
When several requests were waiting for a response, then after getting
a CNAME response only the last request was properly processed, while
others were left waiting.
If one or more requests were waiting for a response, then after
getting a CNAME response, the timeout event on the first request
remained active, pointing to the wrong node with an empty
rn->waiting list, and that could cause either a null pointer
dereference or a use-after-free memory access if this timeout
expired.

If several requests were waiting for a response, and the first
request terminated (e.g., due to the client closing a connection),
other requests were left without a timeout and could potentially
wait indefinitely.

This is fixed by introducing per-request independent timeouts.
This change also reverts 954867a2f0a6 and 5004210e8c78.
Setting rb->bufs to NULL is redundant after ngx_http_write_request_body()
has returned NGX_OK.
If enabled, worker processes are bound to available CPUs, each worker
to one CPU, in order. If there are more workers than available CPUs,
the remaining workers are bound in a loop, starting again from the
first available CPU.

The optional mask parameter defines which CPUs are available for
automatic binding.

In collaboration with Vladimir Homutov.
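
For example (illustrative values):

    worker_processes     4;
    worker_cpu_affinity  auto;

    # or, restricting automatic binding to CPUs 0, 2, 4 and 6:
    worker_cpu_affinity  auto 01010101;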
Previously, only r->method was changed, resulting in the request being
handled as GET within nginx itself, but not in requests to proxied
servers.

See http://mailman.nginx.org/pipermail/nginx/2015-December/049518.html.
This was broken in a93345ee8f52 (1.9.8).
This flag makes the sub filter flush buffered data and optimizes
allocation in the copy filter.
With the main request buffered, it's possible that a slice subrequest
will send output before it. For example, while the main request is
waiting for an aio read to complete, a slice subrequest can start an
aio operation as well. The order in which the aio callbacks are called
is undetermined.
Skip SSL_CTX_set_tlsext_servername_callback in case of renegotiation,
and do nothing in the SNI callback, as in this case it would be supplied
with the request in c->data, which isn't expected and doesn't work this
way.

This was broken by b40af2fd1c16 (1.9.6) with the OpenSSL master branch
and LibreSSL.
Splits a request into subrequests, each providing a specific range of
the response. The variable "$slice_range" must be used to set the
subrequest range and a proper cache key. The directive "slice" sets
the slice size.

The following example splits requests into 1-megabyte cacheable
subrequests.

    server {
        listen 8000;

        location / {
            slice             1m;
            proxy_cache       cache;
            proxy_cache_key   $uri$is_args$args$slice_range;
            proxy_set_header  Range $slice_range;
            proxy_cache_valid 200 206 1h;
            proxy_pass        http://127.0.0.1:9000;
        }
    }
Signed-off-by: Piotr Sikora <piotrsikora@google.com>
The NGX_PTR_SIZE macro is only needed in preprocessor directives where
it's not possible to use sizeof().
|
This is an API change. The proxy module was modified to not depend on
this in 44122bddd9a1. No third-party modules are known to depend on it.
Do not assume that a space character follows the method name; just pass
it explicitly. The fuss around it has already proven to be unsafe; see
bbdb172f0927 and
http://mailman.nginx.org/pipermail/nginx-ru/2013-January/049692.html
for details.