# Nginx Configuration
Nginx is a high-performance HTTP server, reverse proxy, and load balancer. Its event-driven architecture handles thousands of concurrent connections with minimal memory overhead, making it the industry standard for serving static content, terminating TLS, and proxying to application servers.
## Configuration Structure

Nginx configuration is hierarchical. The main file is typically `/etc/nginx/nginx.conf`, which includes additional files for organization.
```
/etc/nginx/
├── nginx.conf           # Main config (worker processes, global settings)
├── conf.d/              # General config snippets (loaded by default)
│   └── default.conf
├── sites-available/     # All virtual host configs (Debian/Ubuntu)
│   ├── default
│   └── example.com
├── sites-enabled/       # Symlinks to active configs
│   └── example.com -> ../sites-available/example.com
├── snippets/            # Reusable config fragments
│   └── ssl-params.conf
└── mime.types           # File extension to MIME type mappings
```
The `sites-available` / `sites-enabled` pattern (common on Debian/Ubuntu) keeps all configurations in one place while controlling which ones are active via symlinks. RHEL-based distributions typically use `conf.d/` exclusively.
**Always test before reloading**

Run `nginx -t` before applying any configuration change. It validates syntax and catches errors like missing semicolons or duplicate `server_name` directives without touching the running server. Apply changes with `sudo systemctl reload nginx` - `reload` is graceful (existing connections finish), while `restart` drops them.
## Server Blocks
Server blocks are Nginx's equivalent of Apache virtual hosts. Each block defines how Nginx handles requests for a specific domain or IP/port combination.
```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```
- `listen`: The port (and optionally IP address) to bind. `listen 80` binds to all interfaces on port 80.
- `server_name`: Domain names this block handles. Nginx matches the `Host` header from the request against these values.
- `root`: The filesystem directory containing the site's files.
- `index`: Default files to serve when a directory is requested.
- `try_files`: Attempts each path in order. Here it tries the exact URI, then the URI as a directory, then returns 404.
## How Nginx Selects a Server Block
When a request arrives, Nginx first matches the `listen` directive (IP + port), then matches `server_name` against the `Host` header. If no `server_name` matches, it falls back to the `default_server`:
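A minimal catch-all looks like this (a sketch; `444` is a non-standard Nginx status code that closes the connection without sending a response):

```nginx
server {
    listen 80 default_server;
    server_name _;   # "_" is a conventional placeholder that never matches a real hostname
    return 444;      # Close the connection without responding
}
```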
This catch-all block drops requests for unknown domains - a basic security measure against scanners probing by IP address.
## Location Blocks and Matching
Location blocks define how Nginx handles requests to specific URL paths. The matching rules have a defined precedence:
| Modifier | Type | Priority | Example |
|---|---|---|---|
| `=` | Exact match | 1 (highest) | `location = /health { ... }` |
| `^~` | Prefix (stops search) | 2 | `location ^~ /static/ { ... }` |
| `~` | Regex (case-sensitive) | 3 | `location ~ \.php$ { ... }` |
| `~*` | Regex (case-insensitive) | 3 | `location ~* \.(jpg\|png)$ { ... }` |
| (none) | Prefix | 4 (lowest) | `location /api/ { ... }` |
Nginx evaluates locations in this order: exact matches first, then prefix matches (longest wins), then regex matches (first match in config order wins). The `^~` modifier on a prefix match prevents regex locations from overriding it.
```nginx
# Exact match for health checks (fastest - no further searching)
location = /health {
    return 200 "ok";
    add_header Content-Type text/plain;
}

# All static files - ^~ prevents regex locations from overriding
location ^~ /static/ {
    root /var/www;
    expires 30d;
    add_header Cache-Control "public, immutable";
}

# API requests proxied to the backend
location /api/ {
    proxy_pass http://localhost:3000;
}

# Everything else
location / {
    try_files $uri $uri/ /index.html;
}
```
## Reverse Proxying
As a reverse proxy, Nginx sits between client browsers and backend application servers. It handles TLS termination, static file serving, and load distribution while your application focuses on business logic.
```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
    }
}
```
- `proxy_pass`: The backend address. A trailing slash matters: `proxy_pass http://backend/` strips the matched location prefix from the forwarded URI; without the slash, the full URI is forwarded.
- `proxy_set_header`: Passes original client information to the backend. Without these headers, the backend sees all requests as coming from 127.0.0.1.
- Timeouts: `proxy_connect_timeout` limits how long Nginx waits to establish a connection; `proxy_read_timeout` limits how long it waits for a response.
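The trailing-slash behavior is worth illustrating (a sketch; the `/api/` prefix and port 3000 are placeholders):

```nginx
location /api/ {
    # Trailing slash: the matched prefix is replaced,
    # so a request for /api/users reaches the backend as /users
    proxy_pass http://localhost:3000/;
}

location /api/ {
    # No trailing slash: the URI is forwarded unchanged,
    # so /api/users reaches the backend as /api/users
    proxy_pass http://localhost:3000;
}
```

The two blocks are alternatives, not a working pair; duplicate `location` prefixes in one `server` block are a configuration error.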
## WebSocket Proxying
WebSocket connections require Nginx to upgrade the HTTP connection:
```nginx
location /ws/ {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 3600s;  # Keep alive for up to 1 hour
}
```
## Load Balancing

The `upstream` block distributes requests across multiple backend servers.
```nginx
upstream api_servers {
    least_conn;  # Send to the server with fewest active connections

    server 10.0.0.1:3000 weight=3;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000 backup;  # Only used if others are down
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://api_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Load balancing algorithms:
| Algorithm | Directive | Behavior |
|---|---|---|
| Round robin | (default) | Distributes requests evenly in order |
| Least connections | `least_conn` | Sends to the server with fewest active connections |
| IP hash | `ip_hash` | Routes the same client IP to the same server (session persistence) |
| Hash | `hash $request_uri` | Routes by a custom key (e.g., URI, cookie) |
## SSL/TLS Termination
Handling TLS at the Nginx layer is more efficient than doing it in the application. Nginx manages certificate negotiation and encryption while forwarding plain HTTP to the backend.
```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern TLS configuration (Mozilla Intermediate)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS - tell browsers to always use HTTPS
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
The first server block redirects all HTTP traffic to HTTPS. The second handles encrypted connections with modern cipher suites.
## Security Headers
A well-configured Nginx adds security headers that instruct browsers to enable protections against common attacks.
```nginx
# Add these in the server or http block
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
```
| Header | Purpose |
|---|---|
| `X-Frame-Options` | Prevents clickjacking by controlling iframe embedding |
| `X-Content-Type-Options` | Stops browsers from MIME-sniffing responses |
| `Strict-Transport-Security` | Forces HTTPS for the specified duration |
| `Content-Security-Policy` | Controls which sources can load scripts, styles, and other resources |
| `Referrer-Policy` | Controls how much URL information is sent in the Referer header |
**Hide server version information**

By default, Nginx exposes its version in response headers (`Server: nginx/1.25.3`) and error pages. Attackers use this to look up known vulnerabilities for specific versions. Add `server_tokens off;` in the `http` block to suppress the version number.
## Rate Limiting
Rate limiting protects your application from brute-force attacks, credential stuffing, and abusive clients. Nginx uses a leaky bucket algorithm to control request rates.
```nginx
# Define the zone in the http block (10MB shared memory, 10 requests/second)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# Stricter limit for authentication endpoints
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

server {
    listen 80;
    server_name example.com;

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://localhost:3000;
    }

    location /login {
        limit_req zone=login_limit burst=3;
        limit_req_status 429;
        proxy_pass http://localhost:3000;
    }
}
```
- `limit_req_zone`: Creates a shared memory zone that tracks request rates per key (usually the client IP). 10MB stores about 160,000 IP addresses.
- `burst`: Allows short bursts above the rate limit. Excess requests queue up.
- `nodelay`: Processes queued burst requests immediately instead of spacing them out.
- `limit_req_status`: Sets the HTTP status code for rejected requests (the default is 503, but 429 Too Many Requests is more correct).
## Logging
Nginx produces two log types: access logs (one line per request) and error logs (server-side problems).
```nginx
# Custom log format with timing information
log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    'rt=$request_time urt=$upstream_response_time';

server {
    access_log /var/log/nginx/example.access.log detailed;
    error_log  /var/log/nginx/example.error.log warn;

    # Disable logging for health checks (reduces noise)
    location = /health {
        access_log off;
        return 200 "ok";
    }
}
```
Key timing variables:
- `$request_time`: Total time from the first client byte to the last response byte (includes backend processing).
- `$upstream_response_time`: Time the backend took to respond. If this accounts for most of `$request_time`, the bottleneck is your application, not Nginx.
## Gzip Compression
Compression reduces bandwidth and speeds up page loads, especially for text-based assets.
```nginx
# In the http block
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 4;
gzip_min_length 256;
gzip_types
    text/plain
    text/css
    text/javascript
    application/javascript
    application/json
    application/xml
    image/svg+xml;
```
- `gzip_vary`: Adds `Vary: Accept-Encoding` so caches store compressed and uncompressed versions separately.
- `gzip_comp_level`: 1-9 (higher = smaller files, more CPU). Levels 4-6 are the sweet spot for most workloads.
- `gzip_min_length`: Skips compression for tiny responses where the overhead isn't worth it.
- `gzip_types`: Only compress text-based formats. Images like JPEG and PNG are already compressed.
## Putting It Together: Production Configuration
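The patterns above combine into a full site configuration. The sketch below is illustrative rather than canonical: the domain, certificate paths, upstream addresses, and zone names are placeholders to adapt, and the `limit_req_zone` and `upstream` directives belong in `http` context (which included site files sit inside).

```nginx
# /etc/nginx/sites-available/example.com (illustrative sketch)

limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

upstream app_servers {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    # Static assets served directly by Nginx
    location ^~ /static/ {
        root /var/www/example.com;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # API traffic: rate-limited and proxied to the upstream pool
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Everything else falls through to the application
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

As always, validate with `nginx -t` before reloading.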
## Further Reading
- Nginx Documentation - official reference for all directives and modules
- Mozilla SSL Configuration Generator - generates secure TLS configurations for Nginx, Apache, and other servers
- Nginx Admin's Handbook - community-maintained guide covering performance tuning and security hardening
- Let's Encrypt / Certbot - free TLS certificates with automated Nginx configuration