
Async vs Sync API Design in Python

Why we built a unified API for axios_python and how it benefits developers.

Two Clients Walk Into a Codebase

Here is a situation every Python developer has lived through at least once.

You find a nice HTTP library. Clean API, good docs, sensible defaults. You build your SDK around it. Then a user opens an issue: "Does this support async?" You check. It does not. So you do what any reasonable person does — you write a second version of your SDK. Same methods, same logic, same edge cases, different class name. GitHubClient and AsyncGitHubClient are born, and they will haunt you for the rest of the project's life.

This is not a niche problem. It is one of the most common sources of maintenance pain in the Python HTTP ecosystem today.


How We Got Here

Python's async story is genuinely good now. asyncio is stable, FastAPI and Starlette have made async web development feel natural, and httpx proved that a library can support both paradigms at the transport level. But "support both" has typically meant "ship two separate clients":

  • requests — synchronous only, no async story
  • aiohttp — asynchronous only; there is no synchronous mode
  • httpx — both, but via httpx.Client and httpx.AsyncClient as distinct classes with separate connection pools and lifecycle management

The third option is the most interesting and the most frustrating. httpx does the right thing at the transport layer but leaves the unification problem entirely to you. If you build an SDK on top of it, you still end up with two classes and two test suites, because the API surfaces are different enough that sharing code between them gets awkward fast.

Here is what that looks like in practice:

import httpx

class GitHubClient:
    def get_user(self, username: str) -> dict:
        with httpx.Client(base_url="https://api.github.com") as client:
            return client.get(f"/users/{username}").json()

    def list_repos(self, username: str) -> list:
        with httpx.Client(base_url="https://api.github.com") as client:
            return client.get(f"/users/{username}/repos").json()


class AsyncGitHubClient:
    async def get_user(self, username: str) -> dict:
        async with httpx.AsyncClient(base_url="https://api.github.com") as client:
            return (await client.get(f"/users/{username}")).json()

    async def list_repos(self, username: str) -> list:
        async with httpx.AsyncClient(base_url="https://api.github.com") as client:
            return (await client.get(f"/users/{username}/repos")).json()

Two classes, four methods, zero lines of shared logic. Now add authentication, retries, response normalization, and rate limiting — and implement all of it twice. Then keep both implementations in sync as the API evolves. Then explain in your README which class to use and why.

It works. It is just quietly exhausting.


One Instance to Rule Them All

axios_python takes a different position: the execution context is a detail, not an identity. Sync and async are not different things — they are different call sites for the same operation.

A single axios_python instance holds everything: the base URL, default headers, timeout settings, retry strategy, interceptors, and middleware pipeline. You create it once, configure it once, and then choose at the call site whether you want a blocking or non-blocking response.

import axios_python

api = axios_python.create({
    "base_url": "https://api.github.com",
    "headers": {"Authorization": "Bearer token"},
})

# Blocking. Works great in Django, scripts, CLI tools.
user = api.get("/users/octocat").json()
repos = api.get("/users/octocat/repos").json()

# Non-blocking. Drops straight into FastAPI routes or asyncio workers.
user = (await api.async_get("/users/octocat")).json()
repos = (await api.async_get("/users/octocat/repos")).json()

Same instance. Same config object. Same interceptors already registered on it. The only thing that changed was the method name.


What Happens in the Middle

Under the hood, api.get() routes through a synchronous pipeline backed by httpx.Client. api.async_get() routes through an asynchronous pipeline backed by httpx.AsyncClient. They share configuration state; they differ only in how they execute the transport call.
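The shape of that design can be pictured with a minimal, library-free sketch. The names here (UnifiedClient, _build_request) are hypothetical and not axios_python internals — the point is just that one object owns all configuration while two thin execute paths share it:

```python
import asyncio

class UnifiedClient:
    """Sketch of one config, two execution paths (not real axios_python code)."""

    def __init__(self, base_url, headers=None):
        # Shared state: both pipelines read the same attributes.
        self.base_url = base_url
        self.headers = headers or {}

    def _build_request(self, path):
        # Identical request construction for both call sites.
        return {"url": self.base_url + path, "headers": dict(self.headers)}

    def get(self, path):
        # Synchronous pipeline: this is where httpx.Client would take over.
        return self._build_request(path)

    async def async_get(self, path):
        # Asynchronous pipeline: this is where httpx.AsyncClient would take over.
        await asyncio.sleep(0)  # stand-in for the awaited transport call
        return self._build_request(path)

api = UnifiedClient("https://api.github.com", {"Accept": "application/json"})
sync_req = api.get("/users/octocat")
async_req = asyncio.run(api.async_get("/users/octocat"))
assert sync_req == async_req  # same config, same request, different execution
```

Because the request-building step is shared rather than duplicated, a config change made once is visible from both call sites automatically.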

The interesting part is what happens to your interceptors and middleware when you switch contexts. axios_python inspects each function at registration time to determine whether it is a regular def or an async def, then executes it appropriately for the current pipeline.

Mixing sync and async interceptors

Register a synchronous interceptor and call api.async_get()? It runs normally inside the async pipeline. Register an async def interceptor and call the synchronous api.get()? axios_python runs it in a locally managed event loop — no changes required from you. Write your interceptors in whatever style matches your project; axios_python handles the context.
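That dispatch decision is ordinary Python. A rough sketch of how any library could do it (the run_hook_* helpers are hypothetical names, not axios_python's API) uses inspect.iscoroutinefunction at registration or call time, and a private event loop to drive an async def hook from a synchronous pipeline:

```python
import asyncio
import inspect

def run_hook_sync(hook, request):
    """Run a sync or async hook from a synchronous pipeline (illustrative only)."""
    if inspect.iscoroutinefunction(hook):
        # async def hook in a sync context: drive it on a locally managed loop.
        return asyncio.run(hook(request))
    return hook(request)

async def run_hook_async(hook, request):
    """Run a sync or async hook from an asynchronous pipeline."""
    if inspect.iscoroutinefunction(hook):
        return await hook(request)
    return hook(request)  # a plain def runs inline in the async pipeline

def add_header(req):        # regular def interceptor
    return {**req, "X-Sync": "1"}

async def add_token(req):   # async def interceptor
    return {**req, "Authorization": "Bearer token"}

req = {"url": "/users/octocat"}
req = run_hook_sync(add_header, req)
req = run_hook_sync(add_token, req)  # async hook, sync context — still works
```

A real implementation would need more care (reusing a loop instead of creating one per call, offloading slow sync hooks in the async path), but the core mechanism is this small.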

In practice, this means a token refresh interceptor, a response normalizer, or a structured logging middleware written once will work identically for your Django users and your FastAPI users. No forks, no if asyncio.get_event_loop().is_running() gymnastics, no version flags.


The SDK Collapses to One Class

Going back to the GitHub SDK example — here is what it looks like built on axios_python:

import axios_python
from axios_python import ExponentialBackoff

class GitHubClient:
    def __init__(self, token: str):
        self._api = axios_python.create({
            "base_url": "https://api.github.com",
            "timeout": 10.0,
            "max_retries": 3,
            "retry_strategy": ExponentialBackoff(base=0.5, multiplier=2.0),
            "headers": {
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
            },
        })

    def get_user(self, username: str) -> dict:
        return self._api.get(f"/users/{username}").json()

    def list_repos(self, username: str) -> list:
        return self._api.get(f"/users/{username}/repos").json()

    async def async_get_user(self, username: str) -> dict:
        return (await self._api.async_get(f"/users/{username}")).json()

    async def async_list_repos(self, username: str) -> list:
        return (await self._api.async_get(f"/users/{username}/repos")).json()

One class. One constructor. One place where auth headers, timeouts, and retry logic live. The async methods are thin wrappers around the same underlying configuration — not a parallel implementation of it. Adding a new endpoint means adding two methods, each a single line. That's the entire cost.


When to Reach for Which

The choice between sync and async in axios_python follows the same rules as anywhere else in Python — it is about your application's runtime, not the library's capabilities.

Reach for the synchronous methods when you are writing CLI tools, background jobs, Django views, or data scripts. The call stack is linear and easy to trace, and there is no event loop to reason about.

Reach for the async methods when you are in FastAPI route handlers, Starlette middleware, or any asyncio-based worker where blocking the event loop would hurt throughput.

The important thing is that the decision is reversible. Migrating a Django application to FastAPI incrementally? Your axios_python instance stays exactly the same. You change the method names at the call sites, and you are done.


Summary

Python's async ecosystem is mature and production-ready. The remaining friction is not in the runtime — it is in the library layer, where sync and async have historically meant two separate things to maintain.

axios_python treats them as one thing. One instance, one configuration, one interceptor pipeline. The execution context is a choice you make at the call site.

That's it. That is the whole idea.
