A .NET 8 Web API project that implements rate limiting functionality using Redis as a distributed cache. This API demonstrates how to protect endpoints from abuse by limiting the number of requests per client within a specified time window.
- Rate Limiting: Configurable rate limiting with Redis-based storage
- IP-based Client Identification: Automatically identifies clients by their IP address
- Distributed Rate Limiting: Uses Redis for scalable, distributed rate limiting across multiple server instances
- HTTP 429 Response: Returns proper HTTP status codes when rate limits are exceeded
- Configurable Limits: Easy to configure request limits and time windows
- .NET 8: Latest LTS version of .NET
- ASP.NET Core: Web framework for building HTTP APIs
- Redis: Distributed cache for storing rate limiting data
- StackExchange.Redis: High-performance Redis client for .NET
- .NET 8 SDK
- Redis server (local or remote)
```bash
git clone <repository-url>
cd RateLimitingApi
```

Update the Redis connection string in `appsettings.json`:
```json
{
  "ConnectionStrings": {
    "Redis": "localhost:6379"
  }
}
```

For production environments, replace `localhost:6379` with your Redis server address.
```bash
dotnet run
```

The API will be available at `https://localhost:5001` or `http://localhost:5000`.
- `GET /sample/hello` - Returns a simple "Hello, World!" message
- Protected by rate limiting middleware
The rate limiting is configured with the following default settings:
- Request Limit: 100 requests per client
- Time Window: 1 minute
- Client Identification: Based on IP address
These settings can be modified in `RateLimitingService.cs`:

```csharp
private readonly int _limit = 100;
private readonly TimeSpan _timeWindow = TimeSpan.FromMinutes(1);
```

- Request Interception: The `RateLimitingMiddleware` intercepts all incoming requests
- Client Identification: Each client is identified by their IP address
- Redis Storage: Request counts are stored in Redis with automatic expiration
- Limit Checking: Before processing each request, the system checks if the client has exceeded their limit
- Response: If the limit is exceeded, a 429 (Too Many Requests) status is returned
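The flow above amounts to a fixed-window counter. Below is a minimal, in-memory Python sketch of that logic (the class and method names are illustrative, not part of the project); in the real service the dictionary is replaced by Redis `INCR` plus `EXPIRE` so the count is shared across server instances:

```python
import time

class FixedWindowRateLimiter:
    """In-memory sketch of a fixed-window request counter."""

    def __init__(self, limit=100, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self._counters = {}  # client_id -> (window_start, count)

    def allow(self, client_id, now=None):
        """Record one request; return False when the client is over the limit."""
        now = time.time() if now is None else now
        start, count = self._counters.get(client_id, (now, 0))
        if now - start >= self.window:
            # Window expired: in Redis, EXPIRE would have evicted the key.
            start, count = now, 0
        count += 1
        self._counters[client_id] = (start, count)
        return count <= self.limit  # False -> respond with HTTP 429

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
print(limiter.allow("1.2.3.4", now=0))  # within limit
```

A denied client is allowed again once the window elapses, since the stale counter is discarded on the next request.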
```
RateLimitingApi/
├── Controllers/
│   └── SampleController.cs       # Example API controller
├── Middleware/
│   └── RateLimitingMiddleware.cs # Rate limiting middleware
├── Services/
│   └── RateLimitingService.cs    # Core rate limiting logic
├── Program.cs                    # Application entry point
├── appsettings.json              # Configuration file
└── RateLimitingApi.csproj        # Project file
```
You can test the rate limiting functionality using curl or any HTTP client:
```bash
# Make multiple requests quickly
for i in {1..110}; do
  curl -i http://localhost:5000/sample/hello
  echo "Request $i"
done
```

After 100 requests within a minute, you should receive a 429 status code.
The rate limiting middleware is applied globally to all endpoints. To exclude specific endpoints, you can modify the middleware to check the request path:
```csharp
// In RateLimitingMiddleware.cs
if (context.Request.Path.StartsWithSegments("/health"))
{
    await _next(context);
    return;
}
```

To implement different rate limits for different endpoints, you can extend the `RateLimitingService` to accept endpoint-specific configurations.
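One way to hold such endpoint-specific configurations is a table of path-prefix policies that the middleware consults before checking the counter. A language-agnostic Python sketch (the prefixes and numbers here are illustrative, not part of the project):

```python
# Hypothetical per-endpoint policies: path prefix -> (limit, window_seconds)
ENDPOINT_POLICIES = {
    "/auth": (10, 60),     # stricter limit, e.g. for login attempts
    "/sample": (100, 60),  # the documented default
}
DEFAULT_POLICY = (100, 60)

def policy_for(path):
    """Return (limit, window_seconds) for the longest matching prefix."""
    best, best_len = DEFAULT_POLICY, -1
    for prefix, policy in ENDPOINT_POLICIES.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = policy, len(prefix)
    return best
```

Matching the longest prefix lets a specific route override a broader one without special-casing in the middleware itself.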
- Redis Configuration: Use a production Redis instance with proper authentication and SSL
- Monitoring: Implement logging and monitoring for rate limiting events
- Whitelisting: Consider implementing IP whitelisting for trusted clients
- Rate Limit Headers: Add rate limit headers to responses (X-RateLimit-*)
- Configuration: Move rate limiting settings to configuration files
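For the rate limit headers mentioned above, the conventional `X-RateLimit-*` values can be derived from the configured limit, the requests used so far, and the window's expiry time (roughly now plus the TTL of the Redis counter key). An illustrative Python sketch (the function name and header set are assumptions, not from the project):

```python
import math

def rate_limit_headers(limit, used, window_reset_epoch):
    """Build conventional X-RateLimit-* response header values.

    window_reset_epoch: Unix time when the current window expires.
    """
    return {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(max(limit - used, 0)),
        "X-RateLimit-Reset": str(math.ceil(window_reset_epoch)),
    }
```

Clamping `Remaining` at zero keeps the header sensible even on requests that are rejected after the limit is reached.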
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.