Commit messages
|
It appears to be a relic from prototype locking removed in b0b7b5a35.
|
This makes it easier to understand why sessions may not be saved
in shared memory due to size.
|
All such transient buffers are converted to a single storage area in
BSS, in preparation for raising the limit.
|
The "resolver" and "resolver_timeout" directives can now be specified
directly in the "upstream" block.
|
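A minimal configuration sketch of what this enables; the server name,
resolver address, and the "zone" directive are illustrative assumptions
here, not taken from the commit:

    upstream backend {
        zone backend 64k;
        server app.example.com resolve;
        resolver 127.0.0.53;
        resolver_timeout 5s;
    }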
|
Specifying the upstream server by a hostname together with the
"resolve" parameter will make the hostname be periodically resolved,
and upstream servers added or removed as necessary. This requires a
"resolver" in the "http" configuration block.

The "resolver_timeout" parameter also affects when failed DNS
requests will be retried. Responses with NXDOMAIN will be retried
after 10 seconds.

Upstream has a configuration generation number that is incremented
each time servers are added to or removed from the primary/backup
list. This number is remembered by the peer.init method, and if
peer.get detects a change in configuration, it returns NGX_BUSY.

Each server has a reference counter. It is incremented by peer.get
and decremented by peer.free. When a server is removed, it is removed
from the list of servers and marked as a "zombie". The memory
allocated by a zombie peer is freed only when its reference count
becomes zero.

Co-authored-by: Roman Arutyunyan <arut@nginx.com>
Co-authored-by: Sergey Kandaurov <pluknet@nginx.com>
Co-authored-by: Vladimir Homutov <vl@nginx.com>
|
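A simplified sketch of the reference-counting scheme described above;
the type and function names are illustrative, not the actual nginx
identifiers:

    #include <ngx_config.h>
    #include <ngx_core.h>

    typedef struct {
        ngx_uint_t  refs;       /* incremented by peer.get, decremented by peer.free */
        unsigned    zombie:1;   /* set when the server is removed from the list */
        /* ... address, weight, and other per-server data ... */
    } example_peer_t;

    static void
    example_peer_free(example_peer_t *peer)
    {
        /* peer.free drops the reference taken by peer.get */
        if (--peer->refs == 0 && peer->zombie) {
            /* the server is no longer on the list and the last
             * reference is gone: its memory can finally be released */
            ngx_free(peer);
        }
    }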
|
Previously, the number of next_upstream tries included servers marked
as "down", resulting in "no live upstreams" with the code 502 instead
of the code derived from an attempt to connect to the last tried "up"
server (ticket #2096).
|
In TLSv1.3, NewSessionTicket messages arrive after the handshake and
can come at any time. Therefore we use a callback to save the session
when we know about it. This approach works for TLS versions before
TLSv1.3 as well. The callback function is set once per location during
the merge phase.

Since SSL_get_session() in BoringSSL returns an unresumable session
for TLSv1.3, peer save_session() methods have been updated as well to
use a session supplied within the callback. To preserve the API, the
session is cached in c->ssl->session. It is preferably accessed in
save_session() methods via the ngx_ssl_get_session() and
ngx_ssl_get0_session() wrappers.
|
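The underlying OpenSSL mechanism is the new-session callback; a
stand-alone sketch of how such a callback is installed (a generic
illustration, not the nginx code):

    #include <openssl/ssl.h>

    static SSL_SESSION  *saved_session;   /* simplified per-process slot */

    /* called by OpenSSL for every new session, including TLSv1.3
     * NewSessionTicket messages that arrive after the handshake */
    static int
    new_session_cb(SSL *ssl, SSL_SESSION *sess)
    {
        (void) ssl;

        if (saved_session) {
            SSL_SESSION_free(saved_session);
        }

        saved_session = sess;

        /* returning 1 means the callback keeps the reference to "sess" */
        return 1;
    }

    void
    enable_session_saving(SSL_CTX *ctx)
    {
        SSL_CTX_set_session_cache_mode(ctx,
            SSL_SESS_CACHE_CLIENT | SSL_SESS_CACHE_NO_INTERNAL);
        SSL_CTX_sess_set_new_cb(ctx, new_session_cb);
    }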
|
This fixes an inconsistency in what is stored in the "host" field.
Normally it would contain the "host" part of the parsed URL (e.g.,
proxy_pass with variables), but for the case of an implicit upstream
specified with a literal address it contained the text representation
of the socket address (that is, the host including the port for IP).

Now the "host" field always contains the "host" part of the URL,
while the text representation of the socket address is stored in the
newly added "name" field.

The ngx_http_upstream_create_round_robin_peer() function was modified
accordingly, so as to remain compatible with code that does not know
about the new "name" field.

The "stream" code was similarly modified, except for not adding
compatibility in ngx_stream_upstream_create_round_robin_peer().

This change is also a prerequisite for the next change.
|
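As a rough illustration of the resulting split (a simplified stand-in,
not the actual nginx structure):

    typedef struct {
        ngx_str_t  host;   /* "host" part of the URL, e.g. "example.com" */
        ngx_str_t  name;   /* socket address text, e.g. "127.0.0.1:8080" */
        /* ... port, resolved addresses, etc. ... */
    } example_resolved_t;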
|
It is to be used to track the version of an upstream configuration
used for request processing.
|
Its usefulness is questionable, and it interacts badly with max_conns.
|
This can be useful, in particular, to understand why "no live
upstreams" happens.
|
Upstreams with the "zone" directive are kept in shared memory,
with a consistent view across all worker processes.
|
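For reference, a minimal example of such a shared upstream (the zone
name and size are arbitrary):

    upstream backend {
        zone backend 64k;
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }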
|
This is an API change.
|
This also simplifies the implementation of the least_conn module.
|
Since peer.tries is never reset, it can now be limited if required.
|
The SSL_SESSION struct is an internal part of the OpenSSL library and
its fields should be accessed via the API (when exposed), not
directly.

The unfortunate side effect of this change is that we're losing the
reference count that used to be printed at the debug log level, but
this seems to be an acceptable trade-off.

Almost fixes build with -DOPENSSL_NO_SSL_INTERN.

Signed-off-by: Piotr Sikora <piotr@cloudflare.com>
|
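For example, values such as the session creation time and timeout can
be read through accessor functions instead of struct fields (a generic
OpenSSL illustration, not the nginx code):

    #include <openssl/ssl.h>

    void
    inspect_session(SSL *ssl)
    {
        SSL_SESSION  *sess = SSL_get0_session(ssl);

        if (sess != NULL) {
            /* accessors instead of sess->time and sess->timeout */
            long  created  = SSL_SESSION_get_time(sess);
            long  lifetime = SSL_SESSION_get_timeout(sess);

            (void) created;
            (void) lifetime;
        }
    }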
|
No functional changes.
|
Changed the initialization order of the peer structure in one of the
cases to be in line with the rest.

No functional changes.
|
Due to peer->checked always being set since rev. c90801720a0c (1.3.0)
by the round-robin and least_conn balancers (ip_hash not affected),
the code in the ngx_http_upstream_free_round_robin_peer() function
incorrectly reset peer->fails too often.

Reported by Dmitry Popov,
http://mailman.nginx.org/pipermail/nginx-devel/2013-May/003720.html
|
This functionality is now provided by ngx_http_upstream_keepalive_module.
|
Sorting of upstream servers by their weights is not required by the
current balancing algorithms.

This will likely change the mapping to backends served by ip_hash
weighted upstreams.
|
Upstreams created by "proxy_pass" with an IP address and no port were
broken in 1.3.10 by not initializing the port in u->sockaddr.

API change: ngx_parse_url() was modified to always initialize the port
(in u->sockaddr and in u->port), even for the u->no_resolve case;
ngx_http_upstream() and ngx_http_upstream_add() were adapted.
|
Based on patch by Thomas Chen (ticket #257).
|
If an upstream block was defined with the only server marked as
"down", e.g.

    upstream u {
        server 127.0.0.1:8080 down;
    }

an attempt was made to contact the server despite the "down" flag.

It is believed that an immediate 502 response is better in such a
case, and it is also consistent with what is currently done in the
case of multiple servers all marked as "down".
|
Due to the weight being set to 0 for down peers, the order of peers
after sorting wasn't the same as without the "down" flag (with down
peers at the end), resulting in rebalancing for clients on other
servers. The only rebalancing which should happen after adding "down"
to a server is the one for clients on that server.

The problem was introduced in r1377 (which fixed an endless loop by
setting weight to 0 for down servers). The loop is no longer possible
with the new smooth algorithm, so preserving the original weight is
safe.
|
For edge-case weights like { 5, 1, 1 } we now produce the sequence
{ a, a, b, a, c, a, a } instead of the { c, b, a, a, a, a, a }
produced previously.

The algorithm is as follows: on each peer selection we increase the
current_weight of each eligible peer by its weight, select the peer
with the greatest current_weight, and reduce its current_weight by
the total number of weight points distributed among peers.

In case of the { 5, 1, 1 } weights this gives the following sequence
of current_weight's:

     a  b  c
     0  0  0  (initial state)

     5  1  1  (a selected)
    -2  1  1

     3  2  2  (a selected)
    -4  2  2

     1  3  3  (b selected)
     1 -4  3

     6 -3  4  (a selected)
    -1 -3  4

     4 -2  5  (c selected)
     4 -2 -2

     9 -1 -1  (a selected)
     2 -1 -1

     7  0  0  (a selected)
     0  0  0

To preserve weight reduction in case of failures the effective_weight
variable was introduced, which usually matches the peer's weight, but
is reduced temporarily on peer failures.

This change also fixes a loop with backup servers and
proxy_next_upstream http_404 (ticket #47), and the skipping of alive
upstreams in some cases when there are multiple dead ones (ticket #64).
|
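A compact, self-contained sketch of the selection step described above
(illustrative code, not the nginx source; the effective_weight handling
on failures is omitted):

    typedef struct {
        int  weight;            /* configured weight */
        int  current_weight;    /* starts at 0 */
    } wrr_peer_t;

    /* one selection round of the smooth weighted round-robin */
    static wrr_peer_t *
    wrr_select(wrr_peer_t *peers, int n)
    {
        int          i, total = 0;
        wrr_peer_t  *best = NULL;

        for (i = 0; i < n; i++) {
            peers[i].current_weight += peers[i].weight;
            total += peers[i].weight;

            if (best == NULL
                || peers[i].current_weight > best->current_weight)
            {
                best = &peers[i];
            }
        }

        if (best != NULL) {
            best->current_weight -= total;
        }

        return best;
    }

With weights { 5, 1, 1 } this reproduces the { a, a, b, a, c, a, a }
sequence from the table above.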
|
Such upstreams cause a CPU hog later in the code, as the number of
peers isn't expected to be 0. Currently this may happen either if
there are only backup servers defined in an upstream block, or if a
server with an IPv6 address is used in an upstream block.
|
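One of the cases mentioned, an upstream block that defines only backup
servers and therefore ends up with zero primary peers:

    upstream u {
        server 127.0.0.1:8080 backup;
    }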
|
Previously nginx used to mark a backend as live again as soon as
fail_timeout had passed (10s by default) since the last failure. On
the other hand, detecting a dead backend takes up to 60s
(proxy_connect_timeout) in the typical situation where the backend is
down and doesn't respond to any packets. This resulted in suboptimal
behaviour in the above situation (up to 23% of requests were directed
to the dead backend with default settings).

A more detailed description of the problem may be found here (in
Russian):
http://mailman.nginx.org/pipermail/nginx-ru/2011-August/042172.html

The fix is to allow only one request after fail_timeout passes, and to
mark the backend as "live" only if this request succeeds.

Note that with the new code the backend will not be marked "live"
unless the "check" request is completed, and this may take a while in
some specific workloads (e.g. streaming). This is believed to be
acceptable.
|
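A rough sketch of the idea (the structure and field names are
illustrative, not the actual nginx code): once fail_timeout has
passed, a single trial request is let through, and the failure counter
is cleared only after that request succeeds.

    #include <time.h>

    typedef struct {
        unsigned  fails;        /* failures seen so far */
        unsigned  max_fails;
        time_t    fail_timeout;
        time_t    checked;      /* when a trial request was last allowed */
        time_t    accessed;     /* when the last failure was recorded */
    } probe_peer_t;

    /* peer.get side: may this peer be tried now? */
    static int
    probe_peer_usable(probe_peer_t *p, time_t now)
    {
        if (p->max_fails == 0 || p->fails < p->max_fails) {
            return 1;
        }

        if (now - p->checked > p->fail_timeout) {
            p->checked = now;   /* let exactly one trial request through */
            return 1;
        }

        return 0;               /* still considered unavailable */
    }

    /* peer.free side, called after a successful request */
    static void
    probe_peer_ok(probe_peer_t *p)
    {
        if (p->accessed < p->checked) {
            p->fails = 0;       /* the trial succeeded: mark the peer live */
        }
    }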
|
The previous allocation only took into account the number of
non-backup servers, and this caused memory corruption with many
backup servers.

See the report here:
http://mailman.nginx.org/pipermail/nginx/2011-May/026531.html
|
The following configuration causes nginx to hog the CPU due to an
infinite loop in ngx_http_upstream_get_peer():

    upstream backend {
        server 127.0.0.1:8080 down;
        server 127.0.0.1:8080 down;
    }

    server {
        ...
        location / {
            proxy_pass http://backend;
        }
    }

Make sure we don't loop infinitely in ngx_http_upstream_get_peer() but
stop after resetting peer weights once.

Return 0 if we are stuck. This is guaranteed to work as peer 0 always
exists, and eventually ngx_http_upstream_get_round_robin_peer() will
do the right thing, falling back to backup servers or returning
NGX_BUSY.
|
by ngx_http_upstream_create_round_robin_peer(), since the peer lives
only during the request, so the saved SSL session will never be used
again and just causes a memory leak

patch by Maxim Dounin
|
*) use ngx_snprintf() in ngx_inet_ntop() and ngx_sock_ntop()
as they are called just once per connection
*) NGX_INET_ADDRSTRLEN
|
|
*) refactor ngx_palloc()
*) introduce ngx_pnalloc()
*) additional pool blocks have smaller header
|
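For illustration, ngx_pnalloc() returns unaligned memory from a pool,
which is the natural choice for byte strings, while ngx_palloc()
returns aligned memory. A typical usage sketch (the helper name is
made up):

    #include <ngx_config.h>
    #include <ngx_core.h>

    static ngx_int_t
    copy_str(ngx_pool_t *pool, ngx_str_t *dst, ngx_str_t *src)
    {
        u_char  *p;

        /* unaligned pool allocation is sufficient for a byte string */
        p = ngx_pnalloc(pool, src->len);
        if (p == NULL) {
            return NGX_ERROR;
        }

        ngx_memcpy(p, src->data, src->len);

        dst->len = src->len;
        dst->data = p;

        return NGX_OK;
    }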