Nginx – Optimizing Performance
Introduction
In this series of articles, we'll look at the various configuration options we can tweak to improve Nginx's performance. Since there is a lot to cover, I won't fit everything into a single article.
Worker Processes
Starting Nginx launches what's known as the Nginx master process. The master process only reads and evaluates the configuration; it is then responsible for spawning and supervising worker processes, which do the actual work of handling requests. The efficiency of Nginx therefore depends mostly on these worker processes.
Nginx lets us specify the number of worker processes to spawn with the worker_processes directive. If that directive isn't used anywhere in the configuration, the default value is one. If our server has multiple cores, we can dedicate a worker process to each core, improving performance.
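For example, on a hypothetical 4-core server (adjust the number to your own hardware), the main context could pin one worker per core explicitly:

```nginx
# Main (top-level) context: one worker process per CPU core on a 4-core machine
worker_processes 4;
```

In practice, though, letting Nginx detect the core count itself (shown below) is usually preferable to hard-coding a number.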
To find the number of cores on your server, run one of the following commands.

On Linux: nproc or lscpu (for a detailed view)

On FreeBSD and OpenBSD: sysctl -n hw.ncpu
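As a quick sketch (Linux, assuming nproc is available), the core count can be captured in a shell variable for use in scripts:

```shell
# Print the number of CPU cores available on this machine (Linux)
cores=$(nproc)
echo "CPU cores: $cores"
```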
We can let Nginx pick the optimal value by setting worker_processes auto; in the main context of our Nginx configuration.
In most of the configurations covered in my previous articles, we have kept the events context empty. However, now that we know about worker processes, we can set one additional parameter inside the events context to control them: worker_connections.
The worker_connections directive specifies the number of connections each worker process can accept. Since everything acts like a file on Unix-like operating systems, this value is also limited by the number of files a process can open at once. That limit can be checked with the built-in shell command ulimit (type help ulimit for usage); ulimit -n prints the open-file limit, which on most systems is 1024.
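A rough, commonly cited upper bound on concurrent clients is worker_processes × worker_connections. As a sketch, assuming one worker per core and 1024 connections per worker:

```shell
# Soft limit on open files for the current shell
ulimit -n

# Rough upper bound on concurrent clients:
# worker_processes (one per core) x worker_connections
echo $(( $(nproc) * 1024 ))
```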
Hence our configuration should look like the following:

```nginx
# other configurations

worker_processes auto;

events {
    worker_connections 1024;
}

http {
    # other configurations
}

# other configurations
```
Buffering and Timeouts
A buffer is a region of memory that holds data temporarily. Whenever Nginx receives or handles data, it may first write it to memory; this is called buffering.
We can tweak a few directives around this to optimize our requests and responses. All of these directives can be set in the http context so that their values are inherited by sub-contexts like server, location, etc.
The effect of these directives is hard to demonstrate directly, so I'll define a sample configuration and explain what each directive means in the comments.
```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    # Amount of memory to allocate for POST data from the client
    client_body_buffer_size 10K;

    # Don't accept request bodies larger than 8 MB
    client_max_body_size 8M;

    # Buffer size for client request headers
    client_header_buffer_size 1K;

    # Maximum time to receive the client body and headers, respectively
    client_body_timeout 12;
    client_header_timeout 12;

    # Keep a connection open for 15 seconds in case more data is on the way
    keepalive_timeout 15;

    # Abort sending a response if the client doesn't receive anything for 10 seconds
    send_timeout 10;

    # Skip userspace buffering for static files
    sendfile on;

    # Optimise the packets sendfile sends
    tcp_nopush on;
}
```
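Because these directives are inherited, a single server or location can override the http-level defaults. As a sketch, a hypothetical /upload endpoint could allow larger request bodies than the rest of the site:

```nginx
http {
    # Site-wide default
    client_max_body_size 8M;

    server {
        listen 80;

        # Hypothetical upload endpoint that accepts larger bodies
        location /upload {
            client_max_body_size 100M;
        }
    }
}
```

Requests exceeding the applicable limit are rejected with a 413 (Request Entity Too Large) response.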
That’s it for this article. There is still a lot to cover, so I’ll be back with more articles in the future. Stay tuned!