NGINX (pronounced "Engine-X") is an open-source, lightweight, high-performance web server and reverse proxy. It proxies HTTP, HTTPS, SMTP, IMAP, and POP3 traffic, and it is also widely used for load balancing and HTTP caching. Nginx accelerates content and application delivery, improves security, and provides availability and scalability for some of the busiest websites on the Internet.
In layman’s terms, Nginx is web-server software built to serve a large number of concurrent requests. Earlier, Apache was the usual choice for this job, but as the web grew and demanded more at one time, concurrency became the pressing problem, and Nginx was introduced to address exactly that.
This website also runs on Nginx, and switching to it boosted performance several times over.
Why is Apache slow, and how did Nginx take over?
Apache was introduced in 1995, when handling huge numbers of simultaneous requests was not yet a concern. Later, when that need arose, the MPM (Multi-Processing Module) architecture was added to Apache. But this approach dedicates a process or thread to each connection, so memory consumption ballooned under load in the years that followed; meanwhile, giant sites started receiving millions of hits every day. A new web server, or a fundamental change in Apache, was needed to fix the problem.
This challenge became known as the C10K problem: serving ten thousand concurrent connections on a single server.
Igor Sysoev started developing Nginx in 2002 to overcome this very problem, and Nginx was first publicly released in 2004. Nginx is lightweight and can handle large numbers of concurrent requests without consuming excessive resources.
It solved the C10K problem.
Also Read: Apache vs Nginx
As of 2014, Nginx hosts nearly 12% (22+ million) of active sites across all domains.
How does Nginx work?
Nginx follows an event-driven model: instead of creating a separate process or thread for each request, as Apache does, it handles many requests inside a single process by reacting to events. Below is a demonstration of an Nginx server handling concurrent MP3 and MP4 file requests.
Nginx divides the work between a Master Process and Worker Processes. The master process reads the configuration and spawns and supervises the workers; it does not serve traffic itself. Each worker process runs an event loop that accepts client connections and manages the requests made and the responses returned (see in Diagram: Worker Connections).

A single worker process can hold around 1,024 connections open at a time; the exact cap is set by the worker_connections directive.

There can be "n" worker processes based on the type of server you have, typically one per CPU core. Since every worker serves many connections at once, the server as a whole can handle a very large number of concurrent requests.

When a worker crashes or the configuration changes, the master process restarts or replaces workers without dropping the connections the other workers are serving.
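In practice, both knobs described above live in nginx.conf. Here is a minimal sketch; worker_processes and worker_connections are real Nginx directives, while the values shown are illustrative defaults, not a recommendation:

```nginx
# Usually one worker process per CPU core; "auto" lets Nginx decide.
worker_processes auto;

events {
    # Maximum simultaneous connections each worker process may hold open.
    worker_connections 1024;
}
```

With this configuration, a 4-core server could in principle hold around 4 × 1024 concurrent connections.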
Also Read: Improve Nginx Performance
Nginx is asynchronous: requests are processed concurrently without blocking one another, flowing like water through a pipe. Resources are shared across connections rather than being dedicated to, and blocked on, a single one. That is why Nginx can do the same work with far less memory, and use that memory efficiently.
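The idea of one process multiplexing many connections can be sketched in a few lines of Python using the standard selectors module (which wraps the same epoll/kqueue mechanisms Nginx uses). This is an illustration of the event-driven model only, not Nginx's actual implementation; the two socket pairs stand in for the concurrent MP3 and MP4 requests from the earlier example:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Simulate two concurrent client connections with socket pairs.
pairs = [socket.socketpair() for _ in range(2)]
for server_side, _client_side in pairs:
    server_side.setblocking(False)          # never block on a single socket
    sel.register(server_side, selectors.EVENT_READ)

# "Clients" send their requests; one loop serves whichever socket is ready.
pairs[0][1].send(b"GET /song.mp3")
pairs[1][1].send(b"GET /video.mp4")

responses = []
while len(responses) < 2:
    for key, _events in sel.select():       # wait for any ready connection
        data = key.fileobj.recv(1024)       # non-blocking read
        responses.append(data)
        sel.unregister(key.fileobj)

# Both requests were handled by a single process, with no extra threads.
```

A thread-per-connection server would need one thread per socket here; the event loop needs only one, which is the memory saving described above.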