NGINX (pronounced "Engine-X") is an open-source, lightweight, high-performance web server and proxy server. Nginx is used as a reverse proxy server for the HTTP, HTTPS, SMTP, IMAP, and POP3 protocols, and it is also used for server load balancing and HTTP caching. Nginx accelerates content and application delivery, improves security, and provides availability and scalability for some of the busiest websites on the Internet.
In layman's terms, Nginx is software used as a web server to serve large numbers of concurrent requests. Earlier we used to install Apache as the web server to handle those functions, but the web kept growing and demanding more things at one time; concurrency became the real issue, and Nginx was introduced to address it.
This website also uses Nginx as its web server, and I was able to boost its performance many times over.
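To make the reverse-proxy, load-balancing, and caching roles mentioned above more concrete, here is a minimal nginx.conf sketch. The backend addresses, cache path, and zone name are assumptions for illustration only, and the surrounding events block is omitted.

    http {
        # Cache zone on disk; the path and zone name are hypothetical
        proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

        # Pool of backend servers to load-balance across (addresses are examples)
        upstream app_servers {
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://app_servers;   # reverse proxy to the upstream pool
                proxy_cache app_cache;           # serve repeat requests from the HTTP cache
            }
        }
    }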

Why Is Apache Slow? How Did Nginx Take Over?
Apache was introduced in 1995, when there was little concept of multitasking in web serving. Later, when multitasking became necessary, the MPM (Multi-Processing Module) was added to Apache to address it. But with this new feature, memory consumption kept growing over the years, while giant sites started receiving millions of hits every day. So a new platform, or a major change in Apache, was needed to fix the issue.
This issue was named the C10K (10,000 concurrent connections) problem.
Igor Sysoev then started development of Nginx in 2002 to overcome this very issue, and Nginx was first publicly released in 2004.
Nginx is lightweight by nature and can handle large numbers of concurrent requests without hogging too many resources. It solved the C10K problem.
Also Read: Apache vs Nginx
As of 2014, Nginx hosts over 12% (22+ million) of active sites across all domains.
How Does Nginx Work?
Nginx follows an event-driven model; it does not create an individual thread or process for each request like Apache does, but instead handles requests as events inside a small number of processes. Below is a demonstration of an Nginx server handling concurrent MP3 and MP4 file requests.

Nginx divides its job between worker connections and worker processes. The worker connections manage the requests made and the responses received by users of the web server; at the same time, these connections are handled by their parent process, the worker process.
A single worker process (see "Worker Connections" in the diagram) can handle around 1,024 connections at a time; that is the upper limit of its worker connections.
There can be "n" worker processes in Nginx, depending on the type of server you have, and each worker process handles its own jobs so that Nginx can serve a higher number of concurrent requests.
Finally, the worker process passes the requests to the Nginx master process, which quickly responds to the unique requests only.
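The worker-process and worker-connection counts described above are controlled by two nginx.conf directives. This is a minimal sketch with commonly used values, not a tuned configuration:

    worker_processes auto;            # spawn one worker per CPU core (a common choice)

    events {
        worker_connections 1024;      # maximum simultaneous connections per worker process
    }

With these values, the rough ceiling on simultaneous clients is worker_processes multiplied by worker_connections.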
Also Read: Improve Nginx Performance
Nginx is asynchronous, which means each request can be executed concurrently without blocking the others, like water flowing through a pipe. In this way Nginx effectively shares resources instead of dedicating and blocking them for a single connection.
That is why Nginx is able to do the same work with a smaller amount of memory and to use that memory in an optimized way.
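The non-blocking behaviour described above relies on the operating system's event-notification mechanism. The fragment below shows the directives commonly associated with it; Nginx normally picks the best method automatically, so setting them explicitly is optional and shown here only for illustration:

    events {
        use epoll;          # event-notification method on Linux (kqueue on BSD); usually auto-detected
        multi_accept on;    # let a worker accept all pending new connections at once
    }

    http {
        sendfile on;        # hand static files to the kernel instead of copying them through Nginx
    }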