Concurrent Requests
Run multiple API requests simultaneously and orchestrate the results.
When building applications that depend on multiple independent endpoints, running requests sequentially (one after another) lets the network delays add up into a waterfall. axios_python provides first-class helpers for orchestrating concurrent requests.
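To see why the waterfall matters, here is a minimal timing sketch using `asyncio.sleep` as a stand-in for two independent network calls (the coroutine names are illustrative, not part of axios_python):

```python
import asyncio
import time

# Hypothetical stand-ins for two independent endpoint calls,
# each simulated as a 0.2-second network round trip.
async def fetch_user():
    await asyncio.sleep(0.2)
    return {"login": "octocat"}

async def fetch_repos():
    await asyncio.sleep(0.2)
    return [{"name": "hello-world"}]

async def sequential():
    start = time.perf_counter()
    await fetch_user()
    await fetch_repos()
    return time.perf_counter() - start   # roughly 0.4s: the delays add up

async def concurrent():
    start = time.perf_counter()
    await asyncio.gather(fetch_user(), fetch_repos())
    return time.perf_counter() - start   # roughly 0.2s: the delays overlap

print(f"sequential: {asyncio.run(sequential()):.2f}s")
print(f"concurrent: {asyncio.run(concurrent()):.2f}s")
```

Two 200 ms round trips cost about 400 ms in sequence but only about 200 ms when overlapped; the gap grows with every additional endpoint.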
The all Helper
The `all()` function works like `Promise.all` in JavaScript (or `axios.all`): it takes an iterable of awaitable request coroutines and executes them concurrently, using `asyncio.gather` under the hood.
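Conceptually, such a helper is a thin wrapper over `asyncio.gather`. The sketch below shows what that could look like (it is not the library's actual source; `all_` is named with a trailing underscore to avoid shadowing the builtin):

```python
import asyncio
from typing import Awaitable, Iterable, List, TypeVar

T = TypeVar("T")

async def all_(requests: Iterable[Awaitable[T]]) -> List[T]:
    """Run all awaitables concurrently; results come back in input order."""
    return list(await asyncio.gather(*requests))

# Usage with plain coroutines standing in for API calls:
async def answer(n: int) -> int:
    await asyncio.sleep(0.01)
    return n * 2

results = asyncio.run(all_([answer(1), answer(2), answer(3)]))
print(results)  # [2, 4, 6]
```

Note that `asyncio.gather` guarantees the result list matches the order of the input awaitables, regardless of which request finishes first.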
```python
import asyncio
from axios_python import create, all

api = create({"base_url": "https://api.github.com"})

async def fetch_dashboard():
    results = await all([
        api.async_get("/users/octocat"),
        api.async_get("/users/octocat/repos")
    ])
    user_profile = results[0].json()
    user_repos = results[1].json()
    print(f"User {user_profile['login']} has {len(user_repos)} public repos.")

asyncio.run(fetch_dashboard())
```

Note: Because `all()` wraps `asyncio.gather`, if any single request fails (e.g., raises an `HTTPStatusError` or `NetworkError`), the exception propagates immediately, and the other requests may be left pending or orphaned depending on your cancellation strategy.
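If you need settled-style behavior, where one failure does not abort the whole batch, `asyncio.gather` itself supports it via `return_exceptions=True`. The sketch below uses plain coroutines and a plain `RuntimeError` rather than axios_python's request methods and exception types:

```python
import asyncio

async def ok(value):
    await asyncio.sleep(0.01)
    return value

async def boom():
    await asyncio.sleep(0.01)
    raise RuntimeError("request failed")

async def fetch_settled():
    # With return_exceptions=True, every slot is filled: either a
    # result or the exception object that the coroutine raised.
    results = await asyncio.gather(
        ok("profile"), boom(), ok("repos"),
        return_exceptions=True,
    )
    for r in results:
        if isinstance(r, Exception):
            print(f"failed: {r}")
        else:
            print(f"succeeded: {r}")
    return results

results = asyncio.run(fetch_settled())
```

This mirrors JavaScript's `Promise.allSettled`: you inspect each slot afterwards instead of catching one propagated exception.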
The spread Decorator
To cleanly unpack the sequence of results returned by `all()`, use the `spread()` decorator. It lets you bind each response to a named positional argument of a callback function.
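Such a decorator is straightforward to build from the standard library. This is a sketch of what `spread()` could look like internally, not the library's actual source:

```python
import functools
from typing import Callable, Sequence

def spread(func: Callable):
    """Wrap func so it accepts a single sequence and unpacks it
    into positional arguments (a sketch of a spread() decorator)."""
    @functools.wraps(func)
    def wrapper(results: Sequence):
        return func(*results)
    return wrapper

@spread
def render(profile, repos):
    return f"{profile} / {repos}"

print(render(["octocat", "hello-world"]))  # octocat / hello-world
```

The wrapped function is still called with one argument (the results sequence); the decorator performs the `*results` unpacking so the callback body reads naturally.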
```python
import asyncio
from axios_python import create, all, spread

api = create({"base_url": "https://api.github.com"})

async def fetch_dashboard():
    results = await all([
        api.async_get("/users/octocat"),
        api.async_get("/users/octocat/repos")
    ])

    @spread
    def render(profile_response, repos_response):
        profile = profile_response.json()
        repos = repos_response.json()
        print(f"User {profile['login']} has {len(repos)} public repos.")

    render(results)

asyncio.run(fetch_dashboard())
```

This pattern keeps variable assignment clean and readable when dealing with more than two concurrent fetches.