Rate Limiting

The Agent API enforces rate limits per API key to ensure fair usage and platform stability. Rate limits are applied on a per-minute sliding window basis.

Default Limits

| Limit Type | Default | Configurable |
|---|---|---|
| Requests per minute | 60 | Yes, via `rate_limit_rpm` |
| Window type | Sliding (1 minute) | No |
| Burst allowance | 10 | No |

Each API key starts with a default limit of 60 requests per minute. This can be increased per key for higher-throughput use cases.

Rate Limit Headers

Every API response includes rate limit headers so your agent can track usage in real time:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 57
X-RateLimit-Reset: 1710000060
```

| Header | Type | Description |
|---|---|---|
| `X-RateLimit-Limit` | integer | Maximum requests allowed per minute for this key |
| `X-RateLimit-Remaining` | integer | Requests remaining in the current window |
| `X-RateLimit-Reset` | integer | Unix timestamp (seconds) when the current window resets |

These headers are present on all responses, including error responses. Use them proactively to pace your requests and avoid hitting the limit.
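The pacing decision can be isolated into a small pure helper fed by these header values. A minimal sketch, where the `low_water` threshold is an illustrative choice rather than anything the API prescribes:

```python
def seconds_to_wait(remaining: int, reset_ts: int, now: float,
                    low_water: int = 5) -> float:
    """Given X-RateLimit-Remaining and X-RateLimit-Reset from the last
    response, return how long to pause before the next request.
    Returns 0 while the remaining budget is above low_water."""
    if remaining >= low_water:
        return 0.0
    # Budget is nearly exhausted: wait until the window resets
    return max(0.0, reset_ts - now)

# With the example headers above (57 of 60 remaining), no pause is needed
print(seconds_to_wait(57, 1710000060, now=1710000040.0))  # 0.0
```

In a live agent, `now` would be `time.time()` and the first two arguments would come from `resp.headers`.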

429 Too Many Requests

When the rate limit is exceeded, the API returns a 429 Too Many Requests response with a Retry-After header:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 23
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1710000083
Content-Type: application/json

{
  "success": false,
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Retry after 23 seconds."
  }
}
```

The Retry-After value is the number of seconds your agent should wait before making another request.

Implementation Examples

Python with Automatic Retry

```python
import time
import requests

class DobAgent:
    BASE_URL = "https://home.dobprotocol.com/api/agent"

    def __init__(self, api_key: str, max_retries: int = 3):
        self.session = requests.Session()
        self.session.headers.update({"X-API-Key": api_key})
        self.max_retries = max_retries

    def request(self, method: str, path: str, **kwargs) -> dict:
        url = f"{self.BASE_URL}{path}"

        for attempt in range(self.max_retries + 1):
            resp = self.session.request(method, url, **kwargs)

            # Log rate limit status
            remaining = resp.headers.get("X-RateLimit-Remaining")
            limit = resp.headers.get("X-RateLimit-Limit")
            if remaining is not None:
                print(f"Rate limit: {remaining}/{limit} remaining")

            if resp.status_code == 429:
                retry_after = int(resp.headers.get("Retry-After", 60))
                if attempt < self.max_retries:
                    print(f"Rate limited. Waiting {retry_after}s "
                          f"(attempt {attempt + 1}/{self.max_retries})")
                    time.sleep(retry_after)
                    continue
                else:
                    resp.raise_for_status()

            resp.raise_for_status()
            return resp.json()

    def get(self, path: str, params: dict = None) -> dict:
        return self.request("GET", path, params=params)

    def post(self, path: str, json: dict = None) -> dict:
        return self.request("POST", path, json=json)

# Usage
agent = DobAgent("dob_ak_your_key_here")
pools = agent.get("/pools", {"network_id": 10})
```

Python with Proactive Throttling

For high-throughput agents that need to stay under the limit without hitting 429:

```python
import time
import requests

class ThrottledAgent:
    BASE_URL = "https://home.dobprotocol.com/api/agent"

    def __init__(self, api_key: str, rpm: int = 60):
        self.session = requests.Session()
        self.session.headers.update({"X-API-Key": api_key})
        self.min_interval = 60.0 / rpm  # seconds between requests
        self.last_request_time = 0

    def _throttle(self):
        """Wait if necessary to maintain the target request rate."""
        elapsed = time.time() - self.last_request_time
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request_time = time.time()

    def get(self, path: str, params: dict = None) -> dict:
        self._throttle()
        resp = self.session.get(f"{self.BASE_URL}{path}", params=params)
        resp.raise_for_status()
        return resp.json()

    def post(self, path: str, json: dict = None) -> dict:
        self._throttle()
        resp = self.session.post(f"{self.BASE_URL}{path}", json=json)
        resp.raise_for_status()
        return resp.json()

# 60 RPM = 1 request per second max
agent = ThrottledAgent("dob_ak_your_key_here", rpm=60)
```

curl with Retry-After Handling

```bash
#!/bin/bash
API_KEY="dob_ak_your_key_here"
URL="https://home.dobprotocol.com/api/agent/pools?network_id=10"
MAX_RETRIES=3

for i in $(seq 1 $MAX_RETRIES); do
  # -D saves the response headers so Retry-After can be read from the
  # same response (a second request would itself count against the limit)
  headers_file=$(mktemp)
  response=$(curl -s -D "$headers_file" -w "\nHTTP_CODE:%{http_code}" \
    -H "X-API-Key: $API_KEY" "$URL")

  http_code=$(echo "$response" | grep "HTTP_CODE:" | cut -d: -f2)
  body=$(echo "$response" | grep -v "HTTP_CODE:")

  if [ "$http_code" = "429" ]; then
    retry_after=$(grep -i "^Retry-After:" "$headers_file" \
      | awk '{print $2}' | tr -d '\r')
    retry_after=${retry_after:-60}  # fall back if the header is missing
    rm -f "$headers_file"
    echo "Rate limited. Retrying in ${retry_after}s (attempt $i/$MAX_RETRIES)..."
    sleep "$retry_after"
  else
    rm -f "$headers_file"
    echo "$body"
    exit 0
  fi
done
echo "Max retries exceeded."
exit 1
```

Custom Rate Limits

If your use case requires higher throughput, you can request a custom rate limit for your API key. Custom limits are configured per key, so different services can have different limits.

| Tier | RPM | Typical Use Case |
|---|---|---|
| Default | 60 | General-purpose agents, dashboards |
| Enhanced | 300 | Active trading bots |
| Enterprise | 1000+ | High-frequency integrations, aggregators |

To request a higher limit, contact the DobProtocol team through the support channels listed on dobprotocol.com.

Best Practices

  1. Monitor the headers. Check X-RateLimit-Remaining proactively. If remaining is low, slow down before hitting the limit.

  2. Respect Retry-After. When you receive a 429, wait exactly the specified number of seconds. Do not retry immediately.

  3. Implement exponential backoff. On consecutive 429 responses, increase the wait time to avoid hammering the API during sustained load.

  4. Cache responses. Pool metadata, marketplace stats, and listing data do not change every second. Cache responses locally and reduce request frequency.

  5. Use webhooks instead of polling. For real-time events like distributions, marketplace sales, or crowdfunding updates, register webhooks instead of polling endpoints in a loop.

  6. Use separate keys for separate concerns. If your trading bot hits the rate limit, your monitoring dashboard should not be affected. Create separate API keys with independent rate limits for each service.

  7. Batch with pagination. When fetching large datasets, use the limit parameter to get more results per request (up to 100) instead of making many small requests.
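
Points 2 and 3 combine naturally: start from the server's `Retry-After` hint and double it on each consecutive 429. A sketch, where the 10% jitter and 120-second cap are illustrative choices rather than values prescribed by the API:

```python
import random

def backoff_delay(attempt: int, retry_after: int, cap: float = 120.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based).

    Doubles the server's Retry-After hint on each consecutive 429 and
    adds up to 10% random jitter so concurrent workers spread out,
    never exceeding `cap`."""
    delay = retry_after * (2 ** attempt)
    jitter = random.uniform(0, delay * 0.1)
    return min(delay + jitter, cap)
```

For example, with `Retry-After: 23` the waits grow roughly 23s, 46s, 92s (plus jitter) before hitting the cap.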