MTU selection starts by doubling the initial MTU until the first probe
failure. Then binary search is used to find the path MTU.
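As an illustration, a minimal sketch of such a probe-size selection (the
names and the exact convergence condition are hypothetical, not nginx's
actual code):

    #include <stddef.h>

    /* Pick the next MTU probe size: double until a probe fails, then
     * binary-search between the largest size known to work and the
     * smallest size known to fail. Returns 0 once the search converges. */
    static size_t
    next_probe_size(size_t works, size_t fails)
    {
        if (fails == 0) {
            return works * 2;                /* doubling phase */
        }

        if (fails - works <= 1) {
            return 0;                        /* converged: path MTU = works */
        }

        return works + (fails - works) / 2;  /* binary search phase */
    }

Here fails == 0 encodes "no probe has failed yet"; each probe result
updates works or fails, and the function is called again.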
This is expected to help with clients that use pipelining with some
constant depth, such as apt [1][2].

When downloading many resources, apt uses pipelining with a constant
depth, i.e. a fixed number of requests in flight. This essentially means
that after receiving a response it sends an additional request to the
server, so requests can arrive at the server at any time. Further, these
additional requests are sent one-by-one, and individually they do not
look pipelined (each arrives neither as part of a pipelined batch nor
followed by pipelined requests).

The only safe way to close such connections (for example, when
keepalive_requests is reached) is with lingering. To do so, nginx now
monitors whether pipelining was used on the connection, and if it was,
closes the connection with lingering.
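For illustration, a lingering close on a plain socket looks roughly like
this (a bare-bones sketch; a real implementation also bounds the drain
loop with a timeout, as lingering_time/lingering_timeout do):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Lingering close: stop sending, then read and discard whatever the
     * client has already pipelined, so its in-flight requests are not
     * answered with an RST. */
    static void
    lingering_close(int fd)
    {
        char     buf[4096];
        ssize_t  n;

        shutdown(fd, SHUT_WR);              /* send FIN, keep reading */

        do {
            n = read(fd, buf, sizeof(buf)); /* drain pending data */
        } while (n > 0);

        close(fd);
    }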
[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=973861#10
[2] https://mailman.nginx.org/pipermail/nginx-devel/2023-January/ZA2SP5SJU55LHEBCJMFDB2AZVELRLTHI.html
Response headers can be buffered in the SSL buffer. But the stream's fake
connection buffered flag did not reflect this, so any attempt to flush
the buffer without sending additional data was stopped by the write filter.

It does not seem to be possible to reflect this in fc->buffered though, as
we never know whether the main connection's c->buffered corresponds to the
particular stream or not. As such, fc->buffered might prevent request
finalization due to data being sent on some other stream.

The fix is to implement flushing of buffers when the c->need_flush_buf
flag is set, similarly to the existing last buffer handling. The same
flag is now used for UDP sockets in the stream module instead of
explicitly checking c->type.
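Schematically, with hypothetical, simplified types (not the actual nginx
structures), the write filter check becomes:

    #include <stddef.h>

    typedef struct conn  conn_t;
    typedef struct chain chain_t;      /* opaque output chain */

    struct conn {
        unsigned   need_flush_buf;     /* data buffered in a lower layer */
        chain_t  *(*send_chain)(conn_t *c, chain_t *in);
    };

    /* Even with no new data, invoke the send chain when a lower layer
     * (such as the SSL buffer) still holds data that must be flushed. */
    static void
    write_filter(conn_t *c, chain_t *in)
    {
        if (in == NULL && !c->need_flush_buf) {
            return;                    /* nothing to send, nothing buffered */
        }

        (void) c->send_chain(c, in);
    }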
Starting with FreeBSD 11, there is no need to use AIO operations to preload
data into the cache for sendfile(SF_NODISKIO) to work. Instead, sendfile()
handles non-blocking loading of data from disk by itself. It still can,
however, return EBUSY if a page is already being loaded (for example, by a
different process). If this happens, we now post an event for the next
event loop iteration, so sendfile() is retried "after a short period", as
the manpage recommends.

The limit on the number of EBUSY results tolerated without any progress is
preserved, but it no longer results in an alert, since on an idle system an
event loop iteration might be very short and EBUSY can happen many times in
a row. Instead, SF_NODISKIO is simply disabled for one call once the limit
is reached.

With this change, sendfile(SF_NODISKIO) is now used automatically as long
as sendfile() is enabled, and no longer requires "aio on;".
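A rough sketch of the retry logic on FreeBSD (error handling is
simplified, and the busy counter and its limit are illustrative):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define MAX_BUSY  2   /* EBUSY results tolerated without progress */

    /* Try sendfile() with SF_NODISKIO; on EBUSY without progress the
     * caller re-posts its write event and retries on the next event
     * loop iteration. Once the limit is hit, SF_NODISKIO is dropped
     * for one (possibly blocking) call. */
    static ssize_t
    send_file(int s, int fd, off_t off, size_t len, unsigned *busy)
    {
        off_t  sent = 0;
        int    flags = (*busy < MAX_BUSY) ? SF_NODISKIO : 0;

        if (sendfile(fd, s, off, len, NULL, &sent, flags) == -1
            && errno == EBUSY)
        {
            *busy = (sent == 0) ? *busy + 1 : 0;  /* progress resets it */
            return sent;    /* retry on the next iteration */
        }

        *busy = 0;
        return sent;
    }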
Similar to lingering_time, it limits the total connection lifetime before
keepalive is switched off. The default is 1 hour, which is close to the
total maximum connection lifetime possible with the default
keepalive_requests and keepalive_timeout.
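Conceptually (with illustrative names and a fixed default, not the actual
directive plumbing):

    #include <time.h>

    #define KEEPALIVE_TIME  3600          /* default: 1 hour */

    typedef struct {
        time_t    established;            /* when the connection was accepted */
        unsigned  keepalive;              /* keep the connection open? */
    } conn_t;

    /* Checked after each request: once the connection's total lifetime
     * exceeds the limit, finish the current request normally but do not
     * enter the keepalive state again. */
    static void
    check_lifetime(conn_t *c)
    {
        if (time(NULL) - c->established > KEEPALIVE_TIME) {
            c->keepalive = 0;
        }
    }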
Keeping post_accept_timeout in ngx_listening_t is no longer needed since
we've switched to a 1 second timeout for deferred accept in 5541:fdb67cfc957d.

Further, using it in HTTP code could result in client_header_timeout being
taken from an incorrect server block, notably if address-specific virtual
servers are used along with a wildcard listening socket, or if we've
switched to a different server block based on SNI during the SSL handshake.
This reduction became feasible after feec2cc762f6, which removed
ngx_quic_connection_t from ngx_connection_s.
Now the QUIC connection is accessed via the c->udp field.
The parameter allows processing HTTP/0.9-2 over QUIC. Also, the
ngx_http_quic_module was introduced, and QUIC settings were moved there.
This breaks graceful shutdown of QUIC connections in terms of quic-transport.
- event handling moved into src/event/ngx_event_quic.c
- http invokes ngx_quic_run() once and passes a stream callback
  (the diff to the original http_request.c is now minimal)
- streams are stored in an rbtree with the stream ID as the key
  (see the sketch below)
- when a new stream is registered, the appropriate callback is called
- the ngx_quic_stream_t type represents a STREAM and is stored in c->qs
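A sketch of the lookup-or-register logic follows (nginx uses ngx_rbtree_t;
a plain unbalanced BST is shown here only to keep the example
self-contained):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct stream  stream_t;

    struct stream {
        uint64_t   id;                 /* QUIC stream ID, the tree key */
        stream_t  *left, *right;
    };

    /* Find a stream by ID, creating it on first sight; *created tells
     * the caller to invoke the new-stream callback. */
    static stream_t *
    stream_get(stream_t **root, uint64_t id, int *created)
    {
        *created = 0;

        while (*root != NULL && (*root)->id != id) {
            root = (id < (*root)->id) ? &(*root)->left : &(*root)->right;
        }

        if (*root == NULL) {
            *root = calloc(1, sizeof(stream_t));
            if (*root != NULL) {
                (*root)->id = id;
                *created = 1;
            }
        }

        return *root;
    }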
Now a new structure, ngx_proxy_protocol_t, holds these fields. This makes
it possible to add more PROXY protocol fields in the future without
modifying the connection structure.
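The rough shape of the extracted structure (an illustrative field set with
plain C types; the real ngx_proxy_protocol_t uses ngx_str_t and in_port_t):

    #include <stdint.h>

    /* All PROXY protocol data lives behind one pointer in the
     * connection, so future fields are added here rather than in
     * ngx_connection_t itself. */
    typedef struct {
        char      *src_addr;
        char      *dst_addr;
        uint16_t   src_port;
        uint16_t   dst_port;
    } proxy_protocol_t;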
Previously, listening sockets were not cloned if the worker_processes
directive was specified after "listen ... reuseport".

This also simplifies the upcoming configuration check on the number
of worker connections, as it needs to know the number of listening
sockets before cloning.
Previously, only one client packet could be processed in a UDP stream
session, even though multiple response packets were supported. Now multiple
packets coming from the same client address and port are delivered to the
same stream session.

If it's required to maintain a single stream of data, nginx should be
configured so that all packets from a client are delivered to the same
worker. On Linux and DragonFly BSD the "reuseport" parameter should be
specified for this. Other systems do not currently provide appropriate
mechanisms, so on those systems a single stream of UDP packets is only
guaranteed in single-worker configurations.

The proxy_responses directive now specifies how many packets are expected
in response to a single client packet.
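Schematically, a packet joins an existing session when its source address
and port match that session's client (a sketch; IPv4 only, for brevity):

    #include <netinet/in.h>

    /* Match a datagram to a session by the client's address and port. */
    static int
    same_client(const struct sockaddr_in *a, const struct sockaddr_in *b)
    {
        return a->sin_port == b->sin_port
               && a->sin_addr.s_addr == b->sin_addr.s_addr;
    }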
With this change it is now possible to load modules compiled without
the "--with-http_ssl_module" configure option into an nginx binary compiled
with it, and vice versa (if a module doesn't use SSL-specific functions),
assuming both use the "--with-compat" option.
With this change it is now possible to load modules compiled without
the "--with-file-aio" configure option into an nginx binary compiled with
it, and vice versa, assuming both use the "--with-compat" option.
With this change it is now possible to load modules compiled without
the "--with-threads" configure option into an nginx binary compiled with
it, and vice versa (if a module does not use thread-specific functions),
assuming both use the "--with-compat" option.
Removed (NGX_HAVE_DEFERRED_ACCEPT && defined TCP_DEFER_ACCEPT)
from the signature accordingly.
Removed NGX_HAVE_REUSEPORT from the signature accordingly.
The IPV6_V6ONLY macro is now checked only while parsing the appropriate
flag and when using the macro.

The ipv6only field in listen structures is always initialized to 1,
even if not supported on a given platform. This is expected to prevent
a module compiled without IPV6_V6ONLY from accidentally creating dual-stack
sockets when loaded into a main binary with proper IPV6_V6ONLY support.
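For reference, the socket option itself; the fallback branch is what a
module built without the macro would compile (illustrative wrapper, real
setsockopt() call):

    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Request an IPv6-only (non dual-stack) listening socket. With
     * ipv6only initialized to 1 unconditionally, a binary that does
     * have IPV6_V6ONLY always sets it, whichever module created the
     * socket. */
    static int
    set_ipv6only(int fd)
    {
    #if defined(IPV6_V6ONLY)
        int  on = 1;
        return setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on));
    #else
        (void) fd;
        return 0;
    #endif
    }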
Also removed the practically unused accept_context_updated flag from
ngx_connection_t.
The function is called only with "struct sockaddr *" since 0.7.58.
The SPDY support is removed, as it's incompatible with the new module.
Iterating through all connections takes a lot of CPU time, especially
with a large number of worker connections configured. As a result, nginx
processes used to consume CPU time during graceful shutdown. To mitigate
this, we now only do a full scan for idle connections when the shutdown
signal is received.

Transitions of connections to the idle state are now expected to be
avoided if the ngx_exiting flag is set. The upstream keepalive module
was modified to follow this.
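In outline, with hypothetical, simplified types: the full scan is driven
by the signal rather than by every cycle iteration:

    #include <stddef.h>

    typedef struct {
        unsigned   idle;
        void     (*close)(void *conn);
    } conn_t;

    /* Run once per shutdown signal: close connections that are already
     * idle. Connections that finish their work later close themselves,
     * since they must not re-enter the idle state while exiting. */
    static void
    close_idle_connections(conn_t *conns, size_t n)
    {
        size_t  i;

        for (i = 0; i < n; i++) {
            if (conns[i].idle) {
                conns[i].close(&conns[i]);
            }
        }
    }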
When configured, an individual listen socket on a given address is
created for each worker process. This allows reducing in-kernel lock
contention in configurations with high accept rates, resulting in better
performance. As of now it works on Linux and DragonFly BSD.

Note that on Linux incoming connection requests are currently tied
to a specific listen socket, and if some sockets are closed, connection
requests will be reset; see https://lwn.net/Articles/542629/. With
nginx, this may happen if the number of worker processes is reduced.
There is no such problem on DragonFly BSD.

Based on previous work by Sepherosa Ziehau and Yingqi Lu.
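The per-worker sockets are ordinary listening sockets with SO_REUSEPORT
set before bind() (a minimal sketch; TCP only, and the backlog value is
illustrative):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Open one listening socket per worker on the same address; the
     * kernel load-balances incoming connections between the sockets. */
    static int
    open_reuseport_listener(const struct sockaddr *sa, socklen_t salen)
    {
        int  fd, on = 1;

        fd = socket(sa->sa_family, SOCK_STREAM, 0);
        if (fd == -1) {
            return -1;
        }

        if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) == -1
            || bind(fd, sa, salen) == -1
            || listen(fd, 511) == -1)
        {
            close(fd);
            return -1;
        }

        return fd;
    }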
The http and stream versions of this macro were identical.
It's mostly dead code and the original idea of worker threads has been rejected.
This reduces the layering violation and simplifies the logic of AIO
preread, since it's now triggered by the send chain function itself
without falling back to the copy filter. The context of an AIO operation
is now stored per file buffer, which makes it possible to properly handle
cases when multiple buffers come from different locations, each with its
own configuration.
The client address specified in the PROXY protocol header is now
saved in the $proxy_protocol_addr variable and can be used in
the realip module.

This is currently not implemented for mail.
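For illustration, pulling the client address out of a PROXY protocol v1
line (a sketch only; a real parser also validates the protocol, ports and
line termination):

    #include <stdio.h>

    /* Extract the source address from a header such as
     * "PROXY TCP4 192.0.2.1 192.0.2.2 56324 443\r\n"; this is the value
     * exposed as $proxy_protocol_addr. */
    static int
    parse_proxy_v1_addr(const char *line, char addr[46])
    {
        char  proto[8];

        return (sscanf(line, "PROXY %7s %45s", proto, addr) == 2) ? 0 : -1;
    }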
This allows using ngx_http_write_filter() and all of its rate limiting logic.
Fallback to synchronous sendfile() is now only done on the 3rd EBUSY in a
row without any progress. Not falling back is believed to be better in
case of an occasional EBUSY, though protection is still needed to make
sure there will be no infinite loop.
---
auto/unix | 12 ++++++++++++
src/core/ngx_connection.c | 32 ++++++++++++++++++++++++++++++++
src/core/ngx_connection.h | 4 ++++
src/http/ngx_http.c | 4 ++++
src/http/ngx_http_core_module.c | 21 +++++++++++++++++++++
src/http/ngx_http_core_module.h | 3 +++
6 files changed, 76 insertions(+)
The c->single_connection flag was intended to be used as a lock mechanism
to serialize modifications of the request object by several threads
working with the client and upstream connections. The flag is redundant,
since threads in nginx have never been used that way.
There is a general consensus that this change results in better
consistency across different operating systems and differently
tuned operating systems.

Note: this changes the width and meaning of the ipv6only field
of the ngx_listening_t structure. Third-party modules that create
their own listening sockets might need fixing.