Rate Limiting for Django Websites
2025-07-03
Sometimes, certain pages of a Django website might receive unwanted traffic from crawlers or malicious bots. These traffic spikes consume server resources and can make the website unusable for legitimate users. In this article, I will explore Nginx’s rate-limiting capabilities to prevent such performance issues.
What is rate limiting, and why use it?
A typical Django website is deployed using a Gunicorn (or Uvicorn for ASGI) application server, with an Nginx web server in front of it. When a request comes to Nginx, it goes through various checks, gets filtered by domain, protocol, and path, and is finally passed to Gunicorn. Gunicorn then runs Django, which parses the URL and returns a response from the appropriate view.
Nginx rate limiting allows you to limit how often certain pages (based on URL paths) or all Django endpoints can be accessed. It prevents quick reloading or scripted attacks that flood your site with requests. This is especially useful for pages with forms, faceted list views, REST APIs, or GraphQL endpoints.
A typical rate-limiting configuration in Nginx defines a zone with a specific memory size (in megabytes), a rate (requests per second or per minute), and one or more locations that apply that zone using options like burst and nodelay. When the rate limit is exceeded, Nginx returns a 429 Too Many Requests response.
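Stripped down to its essentials, such a configuration has two parts: a shared zone declared at the http level and a limit_req directive inside the location you want to protect. Here is a minimal sketch; the zone name, rate, and path are placeholders, not the values used in the example further below.

    limit_req_zone $binary_remote_addr zone=demo_zone:10m rate=5r/s;  # key requests by client IP, 10 MB of state, 5 requests per second

    server {
        location /some-busy-page/ {
            limit_req zone=demo_zone burst=10 nodelay;  # allow up to 10 excess requests, served without delay
            limit_req_status 429;                       # respond with 429 instead of the default 503
            # ... proxy_pass to Gunicorn as usual ...
        }
    }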
You can imagine the burst as a funnel. The requests are like small balls coming in quickly. The first request passes through immediately, and the others get stacked in the funnel. Any requests that don’t fit into the funnel are rejected with a 429 response. The rest are either delayed or processed immediately, depending on whether nodelay is set. If more requests come in and there’s space in the funnel, they’re stacked and processed according to the defined rate.
Practical example
Here is an example configuration that limits list views to 1 request per second, allowing up to 2 additional requests in a short burst before rejecting further ones. It also limits POST requests to authentication paths to 10 per minute, with bursts of up to 5 requests at a time, and applies a more generous fallback limit to all other URL paths. All excess requests beyond these limits are rejected. Each zone has 10 MB of storage for metadata such as IP addresses, timestamps, and the current state of the funnel.
map $request_method $limit_key {
    POST    $binary_remote_addr;
    default "";
}

limit_req_zone $binary_remote_addr zone=list_pages:10m rate=1r/s;
limit_req_zone $limit_key zone=auth_conditional:10m rate=10r/m;
limit_req_zone $binary_remote_addr zone=global_fallback:10m rate=20r/s;

server {
    listen 443 ssl;
    server_name www.pybazaar.com;
    charset utf-8;

    # ...

    location ~* "^/(|resources|profiles|opportunities|buzz)/?$" {
        limit_req zone=list_pages burst=2 nodelay;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_redirect off;
        proxy_read_timeout 300s;
    }

    location ~* "^/(signup|login)/?$" {
        limit_req zone=auth_conditional burst=5 nodelay;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_redirect off;
        proxy_read_timeout 300s;
    }

    location / {
        limit_req zone=global_fallback burst=40 nodelay;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_redirect off;
        proxy_read_timeout 300s;
    }

    # ...
}
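While tuning the numbers, it can help to see how often a limit would trigger before actually enforcing it. The ngx_http_limit_req_module offers limit_req_log_level and, in Nginx 1.17.1 and later, limit_req_dry_run for this. This is an optional sketch with placeholder names, not part of the configuration above.

    location /some-busy-page/ {
        limit_req zone=demo_zone burst=10 nodelay;
        limit_req_status 429;
        limit_req_log_level warn;  # log rejections at "warn" instead of the default "error"
        limit_req_dry_run on;      # count excess requests in the zone but don't reject them (Nginx 1.17.1+)
        # ...
    }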
Checking if it worked
Once you have this set up, you can test the rate limiting using the following bash script:
#!/usr/bin/env bash

for i in {1..20}; do
    {
        status=$(curl -w "%{http_code}" -o /dev/null -s \
            https://www.pybazaar.com/opportunities/)
        echo "Request $i: $status"
    } &
done

wait
This script will output the HTTP status codes for 20 requests sent simultaneously, for example:
Request 7: 429
Request 4: 429
Request 8: 429
Request 3: 429
Request 19: 429
Request 15: 429
Request 9: 429
Request 13: 429
Request 6: 429
Request 20: 429
Request 18: 429
Request 10: 429
Request 17: 429
Request 11: 429
Request 16: 429
Request 14: 429
Request 12: 429
Request 5: 200
Request 1: 200
Request 2: 200
Keep in mind that the requests run in parallel, so the output order and the particular requests that succeed will vary between runs. What matters is the count: with rate=1r/s, burst=2, and nodelay, three requests get through (one allowed by the rate plus two from the burst), and the remaining seventeen are rejected with a 429.
Next steps
The next step could be setting up the fail2ban service to ban IP addresses that receive the 429 response code too frequently. Should I write an article about that?
Conclusion
Rate limiting with Nginx is an effective way to protect your Django website from traffic spikes caused by crawlers or malicious bots. By setting limits on requests per second or minute—using zones and bursts—you can control how many requests each IP can make to different parts of your site. This helps prevent server overload and keeps the website responsive for real users.