
Retry Engine

Configuring automatic retries and backoff strategies for flaky networks.


Networks fail. Services restart. Rate limits kick in. The retry engine in axios_python handles transient failures automatically so your application code doesn't have to.

This page covers how the engine works, how to configure it, and how to extend it.


How It Works

The retry engine sits between the request interceptors and the response interceptors. When a request fails, the engine intercepts the exception, evaluates whether it's retryable, waits for the configured delay, then re-issues the original request — including all interceptors and middleware — from the beginning.

api.get("/endpoint")
         │
         ▼
┌─────────────────────┐
│  Request            │  Headers, auth, trace IDs
│  Interceptors       │  injected here
└────────┬────────────┘
         │
         ▼
┌─────────────────────┐
│  Middleware         │  Timing, logging, circuit
│  Pipeline           │  breakers run here
└────────┬────────────┘
         │
         ▼
┌─────────────────────┐
│  Transport          │  httpx makes the actual
│  (httpx)            │  network call
└────────┬────────────┘

    ┌────┴─────┐
    │          │
 success    failure
    │          │
    │          ▼
    │   ┌─────────────────────────────┐
    │   │  Should retry?              │
    │   │                             │
    │   │  retry_on(exc) → True?  ───────► wait delay ──► retry from top
    │   │  attempts < max_retries?    │
    │   │                             │
    │   │  No → raise exception   ────────────────────────────┐
    │   └─────────────────────────────┘                       │
    │                                                          │
    ▼                                                          ▼
┌─────────────────────┐                             exception propagates
│  Response           │                             to your call site
│  Interceptors       │
└────────┬────────────┘
         │
         ▼
    response returned
    to your call site

A few important behaviors that follow from this design:

Interceptors re-run on every attempt. Because the retry loops back to the top of the pipeline, your request interceptors fire again each time. This means a dynamic token provider in AuthPlugin will fetch a fresh token on each retry attempt — which is usually exactly what you want after a 401.

Middleware wraps the entire retry loop. A timing middleware registered with client.use() measures wall-clock time inclusive of all retry attempts and wait periods. If you want per-attempt timing, register it inside a custom RetryStrategy.

The engine only catches transport-layer failures by default. An HTTP response with a 500 status code is still a successful exchange at the transport level, so httpx does not raise an exception for it; only an explicit response.raise_for_status() call does. By default, the retry engine never sees it. See Controlling What Gets Retried for how to change this.
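Put together, the loop in the diagram can be modeled in a few lines of plain Python. This is a simplified sketch of the control flow, not the library's actual implementation; `send`, `retry_on`, and `get_delay` stand in for the full pipeline, the retry predicate, and the strategy's delay method.

```python
import time

def request_with_retries(send, retry_on, get_delay, max_retries):
    """Simplified model of the engine's retry loop (not the real implementation)."""
    attempt = 0
    while True:
        try:
            return send()  # the full pipeline re-runs on every attempt
        except Exception as exc:
            attempt += 1
            if attempt > max_retries or not retry_on(exc):
                raise  # exhausted or non-retryable: propagate to the caller
            time.sleep(get_delay(attempt))  # the strategy decides the wait

# Demo: a transport that fails twice, then succeeds.
calls = {"count": 0}

def flaky_send():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("socket reset")
    return "ok"

result = request_with_retries(flaky_send, lambda exc: True, lambda attempt: 0.0, 5)
# result == "ok" after 3 attempts (2 retries)
```

Note that the attempt counter is only incremented on failure, which is what makes `get_delay` receive 1 on the first retry, matching the 1-indexing described under Custom Strategies.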


Basic Configuration

Attach a retry strategy when creating a client instance. The strategy is shared across every request the instance makes.

from axios_python import ExponentialBackoff
import axios_python

api = axios_python.create({
    "base_url": "https://api.example.com",
    "max_retries": 3,
    "retry_strategy": ExponentialBackoff(base=1.0, multiplier=2.0, max_delay=10.0),
})

With this config, a failing request is retried up to 3 times, with delays of 2s → 4s → 8s between attempts; if all attempts are exhausted, RetryError is raised.

You can also override retry settings on individual requests without changing the instance defaults:

# This specific request retries 5 times regardless of the instance default
response = api.get("/flaky-endpoint", max_retries=5)

Backoff Strategies

The three built-in strategies cover the most common production patterns.

FixedDelay

Waits an identical amount of time between every attempt. Use this when you're dealing with a known rate limit (Retry-After: 5) or a service that needs a fixed cooldown period before retrying.

from axios_python import FixedDelay

# Retries at t+2s, t+4s, t+6s
strategy = FixedDelay(delay=2.0)
attempt   wait
1         2.0s
2         2.0s
3         2.0s

LinearBackoff

Increases wait time by a fixed increment on each attempt, calculated as base + (attempt × increment). Use this for moderate backoff without the steep growth of exponential strategies — well-suited for internal services where you expect recovery within seconds.

from axios_python import LinearBackoff

# Waits 2s, then 3s, then 4s between attempts, capped at 5s
strategy = LinearBackoff(base=1.0, increment=1.0, max_delay=5.0)
attempt   formula            wait
1         1.0 + (1 × 1.0)    2.0s
2         1.0 + (2 × 1.0)    3.0s
3         1.0 + (3 × 1.0)    4.0s
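The schedule in the table can be reproduced with a small helper implementing the documented base + (attempt × increment) rule with the cap applied (the function name is ours, not a library API):

```python
def linear_delay(attempt: int, base: float = 1.0, increment: float = 1.0,
                 max_delay: float = 5.0) -> float:
    # base + (attempt × increment), capped at max_delay; attempt is 1-indexed
    return min(base + attempt * increment, max_delay)

print([linear_delay(n) for n in range(1, 6)])  # [2.0, 3.0, 4.0, 5.0, 5.0]
```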

ExponentialBackoff

Multiplies the wait time on each attempt, calculated as base × (multiplier ^ attempt). This is the industry standard for third-party APIs and public services — the rapid growth discourages hammering an already-struggling service. The max_delay cap prevents the wait from becoming impractically long.

from axios_python import ExponentialBackoff

# Waits 2s, then 4s, then 8s between attempts, capped at 30s
strategy = ExponentialBackoff(base=1.0, multiplier=2.0, max_delay=30.0)
attempt   formula      wait
1         1.0 × 2¹     2.0s
2         1.0 × 2²     4.0s
3         1.0 × 2³     8.0s
4         1.0 × 2⁴     16.0s
5         1.0 × 2⁵     30.0s (capped)
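As a sanity check, the same schedule falls out of the documented base × (multiplier ^ attempt) formula with the cap applied (again, the helper name is ours, not a library API):

```python
def exponential_delay(attempt: int, base: float = 1.0, multiplier: float = 2.0,
                      max_delay: float = 30.0) -> float:
    # base × (multiplier ^ attempt), capped at max_delay; attempt is 1-indexed
    return min(base * multiplier ** attempt, max_delay)

print([exponential_delay(n) for n in range(1, 6)])  # [2.0, 4.0, 8.0, 16.0, 30.0]
```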

Add jitter in high-concurrency environments

If many clients start simultaneously and all fail at once, pure exponential backoff causes them to retry in lockstep — creating waves of load called a thundering herd. Adding jitter (randomizing the delay within a range) spreads retries out over time. See Custom Strategies for a JitterBackoff implementation.


Controlling What Gets Retried

By default, the engine retries on NetworkError (connection refused, DNS failure, socket reset) and TimeoutError. It does not retry on HTTP error responses — those require an explicit raise_for_status() call and a custom retry_on predicate.

Pass a retry_on callable to override this behavior. The function receives the exception and returns True to retry, False to raise immediately.

from axios_python import HTTPStatusError, NetworkError, TimeoutError

def should_retry(exc: Exception) -> bool:
    # Always retry on transport failures
    if isinstance(exc, (NetworkError, TimeoutError)):
        return True

    # Retry on 429 (rate limited) and 5xx (server errors), not 4xx
    if isinstance(exc, HTTPStatusError):
        return exc.response.status_code == 429 or exc.response.status_code >= 500

    return False

api = axios_python.create({
    "base_url": "https://api.example.com",
    "max_retries": 4,
    "retry_strategy": ExponentialBackoff(base=1.0, multiplier=2.0),
    "retry_on": should_retry,
})

To retry on HTTP status codes, you must also ensure those responses raise an exception before the engine evaluates them. Register a response interceptor that calls response.raise_for_status(), so that 4xx/5xx responses surface as HTTPStatusError for your retry_on predicate to inspect.
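Such an interceptor can be as small as a function that raises for error statuses and otherwise returns the response unchanged. The sketch below uses a stub response object purely to illustrate the behavior; the registration call shown in the comment (`api.interceptors.response.use(...)`) is an assumption about the interceptor API, so check the interceptor documentation for the real signature.

```python
class StubResponse:
    """Minimal stand-in for a real response object (illustration only)."""
    def __init__(self, status_code: int):
        self.status_code = status_code

    def raise_for_status(self):
        if self.status_code >= 400:
            raise RuntimeError(f"HTTP {self.status_code}")

def raise_for_status_interceptor(response):
    # Convert 4xx/5xx responses into exceptions so the retry engine sees them
    response.raise_for_status()
    return response

ok = raise_for_status_interceptor(StubResponse(200))  # passes through untouched

# Hypothetical registration -- verify against the interceptor docs:
# api.interceptors.response.use(raise_for_status_interceptor)
```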


Custom Strategies

Subclass RetryStrategy and implement get_delay(attempt: int) -> float. The attempt argument is 1-indexed — 1 on the first retry, 2 on the second, and so on.

Jitter Backoff

Randomizes the delay within a range to prevent thundering herd problems when many clients retry simultaneously.

import random
from axios_python import RetryStrategy

class JitterBackoff(RetryStrategy):
    """Uniform random delay within [min_delay, max_delay]."""

    def __init__(self, min_delay: float = 0.5, max_delay: float = 5.0):
        self.min_delay = min_delay
        self.max_delay = max_delay

    def get_delay(self, attempt: int) -> float:
        return random.uniform(self.min_delay, self.max_delay)


api.get("/high-traffic-endpoint", max_retries=5, retry_strategy=JitterBackoff())

Decorrelated Jitter

A more sophisticated variant that tends to produce shorter average delays than exponential backoff while still avoiding lockstep retries. Popularized by the AWS Architecture Blog.

import random
from axios_python import RetryStrategy

class DecorrelatedJitter(RetryStrategy):
    """
    Decorrelated jitter: each delay is random between `base` and 3× the previous delay.
    Produces shorter mean delays than full exponential while avoiding synchronization.
    """

    def __init__(self, base: float = 1.0, max_delay: float = 30.0):
        self.base = base
        self.max_delay = max_delay
        self._last_delay = base

    def get_delay(self, attempt: int) -> float:
        delay = random.uniform(self.base, self._last_delay * 3)
        self._last_delay = min(delay, self.max_delay)
        return self._last_delay
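A quick standalone simulation of the same recurrence (reimplemented here as a pure function, without the RetryStrategy base class) shows that every delay stays within [base, max_delay] no matter how many attempts are made:

```python
import random

def decorrelated_delays(attempts: int, base: float = 1.0,
                        max_delay: float = 30.0, seed: int = 42) -> list[float]:
    # Same recurrence as DecorrelatedJitter.get_delay, run `attempts` times
    rng = random.Random(seed)
    last, delays = base, []
    for _ in range(attempts):
        last = min(rng.uniform(base, last * 3), max_delay)
        delays.append(last)
    return delays

delays = decorrelated_delays(10)
assert all(1.0 <= d <= 30.0 for d in delays)
```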

Configuration Reference

Prop             Type
max_retries      int
retry_strategy   RetryStrategy
retry_on         Callable[[Exception], bool]

Error Handling

When all retry attempts are exhausted, the engine raises RetryError. The original exception that caused the final failure is available as RetryError.__cause__.

import axios_python
from axios_python import RetryError, NetworkError

try:
    api.get("/unreliable-endpoint")
except RetryError as e:
    print(f"All {api.config['max_retries']} attempts failed.")
    print(f"Final error: {e.__cause__}")  # The underlying NetworkError or TimeoutError
