# Rate Limiting
The Agent API enforces rate limits per API key to ensure fair usage and platform stability. Rate limits are applied on a per-minute sliding window basis.
## Default Limits
| Limit Type | Default | Configurable |
|---|---|---|
| Requests per minute | 60 | Yes, via rate_limit_rpm |
| Window type | Sliding (1 minute) | No |
| Burst allowance | 10 | No |
Each API key starts with a default limit of 60 requests per minute. This can be increased per key for higher-throughput use cases.
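One common way to reason about a burst allowance on top of a steady per-minute rate is a token bucket. The sketch below is a client-side illustration only — the table above does not say the server uses this exact algorithm — with the burst allowance as the bucket capacity and the RPM limit as the refill rate:

```python
import time

class TokenBucket:
    """Client-side model of the limiter: `capacity` mirrors the burst
    allowance, and tokens refill at `rpm / 60` per second."""

    def __init__(self, capacity: float = 10, rpm: float = 60):
        self.capacity = capacity
        self.refill_rate = rpm / 60.0  # tokens added per second
        self.tokens = capacity
        self.updated = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.refill_rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, rpm=60)
allowed = sum(bucket.try_acquire() for _ in range(15))  # 15 back-to-back calls
```

Run back-to-back, only the first 10 calls succeed; the remaining 5 would have to wait for tokens to refill at one per second.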
## Rate Limit Headers
Every API response includes rate limit headers so your agent can track usage in real time:
```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 57
X-RateLimit-Reset: 1710000060
```
| Header | Type | Description |
|---|---|---|
| X-RateLimit-Limit | integer | Maximum requests allowed per minute for this key |
| X-RateLimit-Remaining | integer | Requests remaining in the current window |
| X-RateLimit-Reset | integer | Unix timestamp (seconds) when the current window resets |
These headers are present on all responses, including error responses. Use them proactively to pace your requests and avoid hitting the limit.
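Reading the three headers into integers is the first step of any pacing logic. A minimal sketch (the helper name and the budget calculation are illustrative, not part of the API):

```python
def rate_limit_status(headers: dict) -> tuple[int, int, int]:
    """Extract the rate-limit headers from a response.

    Returns (limit, remaining, reset); `reset` is a Unix timestamp in seconds.
    """
    return (
        int(headers["X-RateLimit-Limit"]),
        int(headers["X-RateLimit-Remaining"]),
        int(headers["X-RateLimit-Reset"]),
    )

# Using the example header values shown above
limit, remaining, reset = rate_limit_status({
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "57",
    "X-RateLimit-Reset": "1710000060",
})
budget_used = 1 - remaining / limit  # fraction of the window already consumed
```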
## 429 Too Many Requests
When the rate limit is exceeded, the API returns a 429 Too Many Requests response with a Retry-After header:
```
HTTP/1.1 429 Too Many Requests
Retry-After: 23
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1710000083
Content-Type: application/json

{
  "success": false,
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Retry after 23 seconds."
  }
}
```
The Retry-After value is the number of seconds your agent should wait before making another request.
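Some intermediaries strip non-standard or even standard headers, so a defensive client can fall back to `X-RateLimit-Reset` when `Retry-After` is missing. A hedged sketch (the helper name is illustrative):

```python
import time

def wait_seconds(headers: dict) -> float:
    """Prefer Retry-After; fall back to X-RateLimit-Reset if it is absent."""
    if "Retry-After" in headers:
        return float(headers["Retry-After"])
    reset = int(headers.get("X-RateLimit-Reset", 0))
    # Reset is an absolute Unix timestamp; convert to a relative delay.
    return max(0.0, reset - time.time())

delay = wait_seconds({"Retry-After": "23"})  # uses Retry-After directly
```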
## Implementation Examples
### Python with Automatic Retry
```python
import time
import requests

class DobAgent:
    BASE_URL = "https://home.dobprotocol.com/api/agent"

    def __init__(self, api_key: str, max_retries: int = 3):
        self.session = requests.Session()
        self.session.headers.update({"X-API-Key": api_key})
        self.max_retries = max_retries

    def request(self, method: str, path: str, **kwargs) -> dict:
        url = f"{self.BASE_URL}{path}"
        for attempt in range(self.max_retries + 1):
            resp = self.session.request(method, url, **kwargs)

            # Log rate limit status
            remaining = resp.headers.get("X-RateLimit-Remaining")
            limit = resp.headers.get("X-RateLimit-Limit")
            if remaining is not None:
                print(f"Rate limit: {remaining}/{limit} remaining")

            if resp.status_code == 429:
                retry_after = int(resp.headers.get("Retry-After", 60))
                if attempt < self.max_retries:
                    print(f"Rate limited. Waiting {retry_after}s "
                          f"(attempt {attempt + 1}/{self.max_retries})")
                    time.sleep(retry_after)
                    continue
                else:
                    resp.raise_for_status()

            resp.raise_for_status()
            return resp.json()

    def get(self, path: str, params: dict | None = None) -> dict:
        return self.request("GET", path, params=params)

    def post(self, path: str, json: dict | None = None) -> dict:
        return self.request("POST", path, json=json)

# Usage
agent = DobAgent("dob_ak_your_key_here")
pools = agent.get("/pools", {"network_id": 10})
```
### Python with Proactive Throttling
For high-throughput agents that need to stay under the limit without hitting 429:
```python
import time
import requests

class ThrottledAgent:
    BASE_URL = "https://home.dobprotocol.com/api/agent"

    def __init__(self, api_key: str, rpm: int = 60):
        self.session = requests.Session()
        self.session.headers.update({"X-API-Key": api_key})
        self.min_interval = 60.0 / rpm  # seconds between requests
        self.last_request_time = 0.0

    def _throttle(self):
        """Wait if necessary to maintain the target request rate."""
        elapsed = time.time() - self.last_request_time
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request_time = time.time()

    def get(self, path: str, params: dict | None = None) -> dict:
        self._throttle()
        resp = self.session.get(f"{self.BASE_URL}{path}", params=params)
        resp.raise_for_status()
        return resp.json()

    def post(self, path: str, json: dict | None = None) -> dict:
        self._throttle()
        resp = self.session.post(f"{self.BASE_URL}{path}", json=json)
        resp.raise_for_status()
        return resp.json()

# 60 RPM = 1 request per second max
agent = ThrottledAgent("dob_ak_your_key_here", rpm=60)
```
### curl with Retry-After Handling
```bash
#!/bin/bash
API_KEY="dob_ak_your_key_here"
URL="https://home.dobprotocol.com/api/agent/pools?network_id=10"
MAX_RETRIES=3

for i in $(seq 1 "$MAX_RETRIES"); do
  headers=$(mktemp)
  # Capture the response headers with -D so Retry-After can be read
  # from the same response, without spending a second request on it
  body=$(curl -s -D "$headers" -H "X-API-Key: $API_KEY" "$URL")
  http_code=$(awk 'NR==1 {print $2}' "$headers")

  if [ "$http_code" = "429" ]; then
    retry_after=$(grep -i '^Retry-After:' "$headers" | awk '{print $2}' | tr -d '\r')
    rm -f "$headers"
    echo "Rate limited. Retrying in ${retry_after:-60}s (attempt $i/$MAX_RETRIES)..." >&2
    sleep "${retry_after:-60}"
  else
    rm -f "$headers"
    echo "$body"
    exit 0
  fi
done

echo "Max retries exceeded." >&2
exit 1
```
## Custom Rate Limits
If your use case requires higher throughput, you can request a custom rate limit for your API key. Custom limits are configured per key, so different services can have different limits.
| Tier | RPM | Typical Use Case |
|---|---|---|
| Default | 60 | General-purpose agents, dashboards |
| Enhanced | 300 | Active trading bots |
| Enterprise | 1000+ | High-frequency integrations, aggregators |
To request a higher limit, contact the DobProtocol team through the support channels listed on dobprotocol.com.
## Best Practices
- **Monitor the headers.** Check `X-RateLimit-Remaining` proactively. If remaining is low, slow down before hitting the limit.
- **Respect `Retry-After`.** When you receive a 429, wait exactly the specified number of seconds. Do not retry immediately.
- **Implement exponential backoff.** On consecutive 429 responses, increase the wait time to avoid hammering the API during sustained load.
- **Cache responses.** Pool metadata, marketplace stats, and listing data do not change every second. Cache responses locally and reduce request frequency.
- **Use webhooks instead of polling.** For real-time events like distributions, marketplace sales, or crowdfunding updates, register webhooks instead of polling endpoints in a loop.
- **Use separate keys for separate concerns.** If your trading bot hits the rate limit, your monitoring dashboard should not be affected. Create separate API keys with independent rate limits for each service.
- **Batch with pagination.** When fetching large datasets, use the `limit` parameter to get more results per request (up to 100) instead of making many small requests.
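The exponential-backoff recommendation can be sketched as follows; the base and cap values are illustrative choices, not API-mandated numbers. Full jitter (a uniform random fraction of the capped delay) spreads retries from many clients so they do not all retry in lockstep:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Full-jitter exponential backoff.

    The ceiling doubles with each attempt (base * 2**attempt), is capped,
    and a uniform random delay within [0, ceiling] is returned.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Delays for five consecutive 429 responses
delays = [backoff_delay(a) for a in range(5)]
```

Combine this with `Retry-After`: a simple policy is to sleep for whichever is larger, the server-provided value or the jittered backoff.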