In this Python article we discuss why a FastAPI asynchronous background task can block other requests, and how to fix it.
FastAPI asynchronous background tasks blocks other requests?
- How to solve the problem of FastAPI asynchronous background tasks blocking other requests?
Solution 1
Your task is defined as async, which means FastAPI (or rather Starlette) will run it in the asyncio event loop.
And because somelongcomputation is synchronous (i.e. not waiting on some IO, but doing computation), it will block the event loop as long as it is running.
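To make the blocking concrete, here is a minimal, self-contained sketch (somelongcomputation, blocking_task, and heartbeat are illustrative stand-ins, not names from the original answer): a synchronous sleep inside an async def stalls every other coroutine on the loop, so the heartbeat cannot tick until the full second has passed.

import asyncio
import time

def somelongcomputation():
    # Synchronous stand-in for CPU-bound work: never yields to the event loop.
    time.sleep(1)

async def blocking_task():
    # Called directly inside an async def, so it runs on the loop's own thread.
    somelongcomputation()

async def heartbeat(ticks):
    # Would tick after ~0.1 s if the event loop were free.
    await asyncio.sleep(0.1)
    ticks.append(time.monotonic())

async def main(ticks):
    await asyncio.gather(blocking_task(), heartbeat(ticks))

ticks = []
start = time.monotonic()
asyncio.run(main(ticks))
elapsed = ticks[0] - start  # well over 1 s: the heartbeat had to wait out the block

In a real FastAPI app the same effect means other requests handled by that worker sit idle until the computation finishes.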
I see a few ways of solving this:
- Use more workers (e.g. uvicorn main:app --workers 4). This will allow up to 4 somelongcomputation runs in parallel.
- Rewrite your task to not be async (i.e. define it as def task(data): ... etc). Then Starlette will run it in a separate thread.
- Use fastapi.concurrency.run_in_threadpool, which will also run it in a separate thread. Like so:

from fastapi.concurrency import run_in_threadpool

async def task(data):
    otherdata = await db.fetch("some sql")
    newdata = await run_in_threadpool(lambda: somelongcomputation(data, otherdata))
    await db.execute("some sql", newdata)

- Or use asyncio's run_in_executor directly (which run_in_threadpool uses under the hood):

import asyncio

async def task(data):
    otherdata = await db.fetch("some sql")
    loop = asyncio.get_running_loop()
    newdata = await loop.run_in_executor(None, lambda: somelongcomputation(data, otherdata))
    await db.execute("some sql", newdata)

You could even pass in a concurrent.futures.ProcessPoolExecutor as the first argument to run_in_executor to run it in a separate process.
- Spawn a separate thread / process yourself, e.g. using concurrent.futures.
- Use something more heavy-handed like Celery. (Also mentioned in the FastAPI docs here.)
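The run_in_executor pattern above can be tried without FastAPI or a database. In this self-contained sketch the db calls are replaced by plain values (an assumption for runnability, not part of the original answer), and the blocking function is a sleep:

import asyncio
import time

def somelongcomputation(data, otherdata):
    # Stand-in for real work: synchronous and slow.
    time.sleep(0.5)
    return data + otherdata

async def task(data):
    otherdata = 10  # in the original answer this came from: await db.fetch("some sql")
    loop = asyncio.get_running_loop()
    # None selects the loop's default ThreadPoolExecutor; pass a
    # concurrent.futures.ProcessPoolExecutor here instead to run the
    # computation in a separate process, as noted above.
    newdata = await loop.run_in_executor(
        None, lambda: somelongcomputation(data, otherdata)
    )
    return newdata

result = asyncio.run(task(32))

While somelongcomputation runs in the executor thread, the event loop stays free to serve other requests.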
Original author of this content: mihi
Solution 2
Read this issue.
Also, in the example below, my_model.function_b could be any blocking function or process.
TL;DR
from starlette.concurrency import run_in_threadpool

@app.get("/long_answer")
async def long_answer():
    rst = await run_in_threadpool(my_model.function_b, arg_1, arg_2)
    return rst
Original author of this content: Zhivar Sourati
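On Python 3.9+ you can get a stdlib-only equivalent of this pattern with asyncio.to_thread, which likewise runs the blocking call in a worker thread and forwards its arguments. This is a sketch under that assumption (function_b here is a stand-in, not the author's my_model.function_b):

import asyncio
import time

def function_b(arg_1, arg_2):
    # Stand-in for any blocking function.
    time.sleep(0.2)
    return arg_1 * arg_2

async def long_answer():
    # Equivalent in spirit to: await run_in_threadpool(function_b, arg_1, arg_2)
    rst = await asyncio.to_thread(function_b, 6, 7)
    return rst

result = asyncio.run(long_answer())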
Conclusion
That is all for this tutorial. We hope it helped you. Thank you.