Without rate limiting on your interfaces, you risk server load imbalance, brute-force password cracking, malicious requests, extra server charges, denial-of-service attacks, and so on.
It is therefore important to rate-limit your interfaces properly.
How do you rate-limit an interface? There are 4 common rate limiting algorithms:
1. Fixed window counter
For example, with a limit of 10 requests per hour, anything over 10 is simply discarded. The disadvantage is that up to twice the limit can slip through around a window boundary. For example, if the fixed windows are full hours, 10 requests sent between 8:50 and 9:00 and another 10 sent between 9:00 and 9:10 are all released, yet 20 requests passed within the single hour from 8:50 to 9:50.
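A minimal in-memory sketch of this idea (class name and parameters are my own, not from any library):

```python
import time

class FixedWindowCounter:
    """Allow at most `limit` requests per `window` seconds.

    The counter resets at each window boundary, which is exactly why
    a burst straddling the boundary can pass up to 2x the limit.
    """
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new window begins: reset the counter.
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```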
2. Sliding window counter
This solves the boundary problem of the fixed window, but the finer the time intervals are divided, the more space the algorithm requires.
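A sketch of the finest-grained variant, which keeps one timestamp per request (names are my own); this makes the space cost mentioned above concrete, since memory grows with the limit rather than staying constant:

```python
import time
from collections import deque

class SlidingWindowLog:
    """Allow at most `limit` requests in any `window`-second span.

    Stores one timestamp per accepted request, so precision costs memory:
    O(limit) here versus O(1) for a fixed window counter.
    """
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.timestamps = deque()

    def allow(self):
        now = time.monotonic()
        # Drop timestamps that have slid out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```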
3. Leaky bucket algorithm
The leaky bucket algorithm is usually implemented with a queue: incoming requests are stored in the queue, and the service takes requests out and executes them at a fixed rate, while excess requests either wait in the queue or are rejected outright. The drawback is obvious: when a burst of requests arrives in a short period, every request has to wait its turn in the queue before it is answered, even if the server is idle at the time.
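A single-threaded sketch of the queue described above (class and method names are my own; a real implementation would drain the queue from a worker thread or event loop):

```python
import time
from collections import deque

class LeakyBucket:
    """Queue incoming requests; drain them at a fixed `leak_rate` per second.

    A full queue means the request is rejected; an accepted request still
    waits its turn, which is the latency drawback noted above.
    """
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # max queued requests
        self.leak_rate = leak_rate    # requests drained per second
        self.queue = deque()
        self.last_leak = time.monotonic()

    def _leak(self):
        # Drain however many requests the elapsed time entitles us to.
        now = time.monotonic()
        drained = int((now - self.last_leak) * self.leak_rate)
        if drained:
            for _ in range(min(drained, len(self.queue))):
                self.queue.popleft()   # these requests get processed
            self.last_leak = now

    def offer(self, request):
        self._leak()
        if len(self.queue) < self.capacity:
            self.queue.append(request)
            return True     # queued; served later at the fixed rate
        return False        # bucket full: reject outright
```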
4. Token bucket algorithm
Tokens are generated at a fixed rate and stored in a token bucket; if the bucket is full, extra tokens are simply discarded. When a request arrives, it tries to take a token from the bucket; if the bucket is empty, the request is discarded. The token bucket algorithm can both spread requests evenly over a time interval and absorb burst requests that are within the server's capacity.
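A minimal sketch (names and parameters are my own); note how a full bucket lets a burst of up to `capacity` requests through immediately, which is the key difference from the leaky bucket:

```python
import time

class TokenBucket:
    """Tokens accrue at `rate` per second, capped at `capacity`.

    Each request consumes one token or is rejected, so short bursts
    up to `capacity` are absorbed without queueing.
    """
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Add the tokens generated since the last check; extras are discarded.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```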
Some of you may ask: why not rate-limit by IP address? It can be done, it's just not as mainstream.
Rate limiting by IP address raises two problems. First, storing the IP addresses is an issue: if the interface runs on a cluster, the counters have to live in a centralized store, preferably Redis. Second, it can penalize normal requests: a large LAN often shares a single exit IP, so limiting that IP may lock out legitimate users.
Of the 4 methods above, the simplest and most practical is the sliding window counter.
Django REST Framework comes with built-in throttling:

REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle'
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/day',
        'user': '1000/day'
    }
}
Here are 3 ways to rate-limit a FastAPI application:
1. slowapi
slowapi is adapted from flask-limiter; the counters are stored in memory by default, as follows:
from fastapi import FastAPI, Request, Response
from fastapi.responses import PlainTextResponse
from slowapi.errors import RateLimitExceeded
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address

# Key each counter by the client's remote address.
limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/home")
@limiter.limit("5/minute")
async def homepage(request: Request):
    return PlainTextResponse("test")

@app.get("/mars")
@limiter.limit("5/minute")
async def mars(request: Request, response: Response):
    return {"key": "value"}
2. fastapi-limiter
It needs a Redis instance to hold the counters:
import aioredis
import uvicorn
from fastapi import Depends, FastAPI
from fastapi_limiter import FastAPILimiter
from fastapi_limiter.depends import RateLimiter

app = FastAPI()

@app.on_event("startup")
async def startup():
    redis = await aioredis.create_redis_pool("redis://localhost")
    await FastAPILimiter.init(redis)

# At most 2 requests every 5 seconds on this route.
@app.get("/", dependencies=[Depends(RateLimiter(times=2, seconds=5))])
async def index():
    return {"msg": "Hello World"}

if __name__ == "__main__":
    uvicorn.run("main:app", debug=True, reload=True)
3. asgi-ratelimit
For example, blocking a user for 60 seconds once they exceed five requests per second:
from ratelimit import RateLimitMiddleware, Rule
from ratelimit.backends.redis import RedisBackend

app.add_middleware(
    RateLimitMiddleware,
    authenticate=AUTH_FUNCTION,  # your authenticate callable
    backend=RedisBackend(),
    config={
        r"^/user": [Rule(second=5, block_time=60)],
    },
)
Of the three, slowapi is recommended, since it currently has the most GitHub stars.