Limits & Quotas
This page documents all rate limits, payload limits, and quotas in Rynko. We believe in transparency: you should know your limits before you hit them.
API Rate Limits
Rate limits protect the platform and ensure fair usage. Limits are applied per API key.
Document generation endpoints (/api/v1/documents/generate and /api/v1/documents/generate/batch) do not have HTTP rate limits. Instead, they are protected by concurrent job limits and monthly quotas. This allows you to queue multiple jobs rapidly while the system manages processing capacity per your plan tier.
Template Endpoints
| Endpoint | Method | Limit | Window |
|---|---|---|---|
| `/api/v1/templates` | GET | 300 requests | per minute |
| `/api/v1/templates` | POST | 30 requests | per minute |
| `/api/v1/templates/:id` | PATCH | 30 requests | per minute |
| `/api/v1/templates/:id` | DELETE | 30 requests | per minute |
Webhook Subscription Endpoints
| Endpoint | Method | Limit | Window |
|---|---|---|---|
| `/api/v1/webhook-subscriptions` | GET/POST/PATCH/DELETE | 30 requests | per minute |
| `/api/v1/webhook-subscriptions/:id/test` | POST | 10 requests | per minute |
Rate Limit Headers
Every API response includes rate limit headers:
```
X-RateLimit-Limit: 30
X-RateLimit-Remaining: 25
X-RateLimit-Reset: 1732723200
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the window |
| `X-RateLimit-Remaining` | Remaining requests in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
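These headers let a client pace itself before ever hitting the limit. A minimal sketch in JavaScript, using the header names from the table above (the `throttledFetch` wrapper is illustrative, not part of any SDK):

```js
// Pause until the window resets when no requests remain in it.
async function throttledFetch(url, options = {}) {
  const response = await fetch(url, options);
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = response.headers.get('X-RateLimit-Reset');
  if (remaining !== null && Number(remaining) === 0 && reset !== null) {
    const waitMs = Math.max(0, Number(reset) * 1000 - Date.now());
    console.log(`Rate limit exhausted; waiting ${Math.ceil(waitMs / 1000)}s for reset`);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  return response;
}
```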
Rate Limit Exceeded Response
When you exceed the rate limit:
```json
{
  "statusCode": 429,
  "code": "ERR_QUOTA_003",
  "message": "Rate limit exceeded. Please retry after 45 seconds.",
  "retryAfter": 45
}
```
Best Practice: Implement exponential backoff when you receive a 429 response.
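A minimal sketch of that backoff, preferring the `retryAfter` hint from the error body above and falling back to doubling waits (the `fetchWithBackoff` helper is illustrative):

```js
async function fetchWithBackoff(url, options, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;
    const body = await response.json();
    // Use the server's retryAfter hint when present; otherwise wait 1s, 2s, 4s, ...
    const waitSeconds = body.retryAfter ?? 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
  }
  throw new Error('Rate limit retries exhausted');
}
```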
Payload Limits
Request Limits
| Limit | Value | Notes |
|---|---|---|
| Request body size | 10 MB | JSON payload maximum |
| URL length | 8 KB | Including query parameters |
| Header size | 16 KB | Total header size |
Generation Payload Limits
| Limit | Value | Notes |
|---|---|---|
| Variables size | 1 MB | JSON variables per request |
| Batch size | 100 | Documents per batch request |
| Variables per document | 100 KB | In batch requests |
Document Generation Limits
Rynko enforces limits on document generation to protect system resources and ensure reliable performance.
PDF Limits
| Limit | Value | Why |
|---|---|---|
| Pages per PDF | 30 | Protects CPU/compute costs |
| Images per PDF | 50 | Protects memory usage |
| Output file size | 10 MB | Storage and download limits |
| Page size | A4 / Letter | Standard sizes |
| Generation timeout | 60 seconds | Per document |
Excel Limits
| Limit | Value | Why |
|---|---|---|
| Rows per workbook | 10,000 | Protects memory/RAM |
| Columns per sheet | 50 | Protects file size |
| Sheets per workbook | 10 | Maximum sheets |
| Output file size | 10 MB | Storage and download limits |
| Cell content | 32,767 chars | Excel standard limit |
| Generation timeout | 120 seconds | Per workbook |
Template Processing Limits
| Limit | Value | Why |
|---|---|---|
| Loop iterations | 1,000 | Prevents stalls from oversized arrays |
| Nesting depth | 10 levels | Prevents infinite recursion |
| Components per template | 500 | Protects rendering performance |
Limit Violation Errors
When you exceed document limits, Rynko immediately stops processing and returns a 400 Bad Request with a specific error code:
| Error Code | Limit | Example Message |
|---|---|---|
| `ERR_LIMIT_001` | PDF pages | "PDF generation exceeded 30 page limit. Your PDF has 45 pages." |
| `ERR_LIMIT_002` | PDF images | "PDF contains 75 images, exceeding the 50 image limit." |
| `ERR_LIMIT_003` | Excel rows | "Excel generation exceeded 10,000 row limit. Your data has 15,000 rows." |
| `ERR_LIMIT_004` | Excel sheets | "Excel workbook exceeded 10 sheet limit. Your workbook has 12 sheets." |
| `ERR_LIMIT_005` | Excel columns | "Excel sheet exceeded 50 column limit. Your sheet has 75 columns." |
| `ERR_LIMIT_006` | File size | "Generated document (12MB) exceeds maximum file size of 10MB." |
| `ERR_LIMIT_007` | Loop iterations | "Loop 'items' has 2,500 items but maximum is 1,000. Consider pagination or splitting data." |
| `ERR_LIMIT_008` | Nesting depth | "Template exceeded 10 level nesting depth. Current depth: 12 levels." |
| `ERR_LIMIT_009` | Component count | "Template exceeded 500 component limit. Your template has 650 components." |
Example Error Response
```json
{
  "statusCode": 400,
  "code": "ERR_LIMIT_001",
  "message": "PDF generation exceeded maximum page limit",
  "timestamp": "2025-12-09T10:30:00.000Z",
  "path": "/api/v1/documents/generate",
  "relatedInfo": {
    "actualPages": 45,
    "maxPages": 30,
    "message": "PDF generation exceeded 30 page limit. Your PDF has 45 pages."
  }
}
```
How to Handle Limit Errors
- Split large datasets: Break data into multiple documents
- Paginate loops: Use pagination for arrays over 1,000 items
- Optimize images: Compress images before including in PDFs
- Simplify templates: Reduce nesting and component count
Pro tip: Validate your data size before calling the API. Check array lengths and estimate row counts client-side to avoid limit errors.
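A sketch of that client-side pre-flight check (the limit constants mirror the tables above; the helper name and payload shape are illustrative):

```js
// Illustrative pre-flight validation against the documented limits.
const LIMITS = { maxLoopItems: 1000, maxVariablesBytes: 1024 * 1024 };

function validateVariables(variables) {
  const errors = [];
  for (const [key, value] of Object.entries(variables)) {
    if (Array.isArray(value) && value.length > LIMITS.maxLoopItems) {
      errors.push(`"${key}" has ${value.length} items; loop limit is ${LIMITS.maxLoopItems}`);
    }
  }
  const bytes = new TextEncoder().encode(JSON.stringify(variables)).length;
  if (bytes > LIMITS.maxVariablesBytes) {
    errors.push(`Variables payload is ${bytes} bytes; limit is 1 MB`);
  }
  return errors; // An empty array means the payload is safe to submit
}
```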
Template Limits
Content Limits
| Limit | Value |
|---|---|
| Variables per template | 500 |
| Components per template | 200 |
| Nested loop depth | 3 levels |
| Conditional nesting | 5 levels |
| Formula complexity | 1,000 operations |
| Template name | 255 characters |
| Template slug | 100 characters |
Template Counts by Plan
| Plan | Templates |
|---|---|
| Free | 2 |
| Starter | 5 |
| Growth | 20 |
| Scale | 50 |
Account Quotas
Document Quotas by Plan
| Plan | Documents/Month | Additional Documents |
|---|---|---|
| Free | 50 | Purchase credit packs |
| Starter | 500 | Purchase credit packs |
| Growth | 2,500 | Purchase credit packs |
| Scale | 10,000 | Purchase credit packs |
Credit Packs
Purchase additional documents beyond your monthly quota:
| Pack | Documents | Price | Per Document |
|---|---|---|---|
| Micro | 100 | $10 | $0.10 |
| Small | 500 | $40 | $0.08 |
| Medium | 1,000 | $70 | $0.07 |
| Large | 2,500 | $150 | $0.06 |
Credits never expire. They are consumed after your monthly quota is exhausted. For example, a Growth account that generates 3,000 documents in a month draws down its 2,500-document quota first, then 500 credits.
Team Limits
| Plan | Team Members |
|---|---|
| Free | 1 |
| Starter | 3 |
| Growth | 10 |
| Scale | Unlimited |
API Key Limits
| Plan | API Keys |
|---|---|
| Free | 1 |
| Starter | 3 |
| Growth | 10 |
| Scale | Unlimited |
Concurrent Jobs
Maximum number of documents that can be processed simultaneously per team. This controls how many workers are busy for your team at any moment.
| Plan | Concurrent Jobs |
|---|---|
| Free | 1 |
| Starter | 5 |
| Growth | 10 |
| Scale | 25 |
Backlog Limit
Maximum number of jobs that can be waiting in queue for processing. This prevents queue overflow and ensures reasonable wait times.
| Plan | Backlog Limit |
|---|---|
| Free | 10 |
| Starter | 100 |
| Growth | 1,000 |
| Scale | 5,000 |
How the two limits work together:
Think of it like a restaurant:
- Concurrent Jobs = Number of chefs cooking (determines speed)
- Backlog = Size of the waiting area (determines capacity)
```
┌──────────────────────────────────────────────────────────────┐
│ Your Request → [Backlog Queue] → [Processing Workers] → Done │
│                    (waiting)          (cooking)              │
│                                                              │
│ Starter Tier:     100 max waiting     5 max processing       │
└──────────────────────────────────────────────────────────────┘
```
Example flow for Starter tier:
- You submit 50 documents via batch API
- 5 jobs start processing immediately (concurrent limit)
- 45 jobs wait in the backlog queue
- As each job completes (~5-15 seconds), the next queued job starts
- All 50 documents complete in roughly 2-3 minutes
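That arithmetic generalizes: jobs run in waves of your concurrent limit, so a rough estimate is (jobs ÷ concurrency) × average job time. A sketch (the 10-second average is illustrative, taken from the example above):

```js
// Rough wall-clock estimate: jobs complete in waves of `concurrency`.
function estimateBatchSeconds(jobCount, concurrency, avgJobSeconds = 10) {
  return Math.ceil(jobCount / concurrency) * avgJobSeconds;
}

console.log(estimateBatchSeconds(50, 5)); // 100 seconds, in line with the 2-3 minutes above
```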
Why backlog limits matter:
Without a backlog limit, a user could queue millions of jobs that would take days to process, resulting in:
- Stale/outdated documents by the time they're ready
- Poor experience for other users sharing the system
- Wasted resources on jobs that are no longer needed
Error when backlog limit exceeded:
```json
{
  "statusCode": 429,
  "code": "ERR_QUOTA_012",
  "message": "Job queue backlog limit exceeded",
  "relatedInfo": {
    "queuedJobs": 100,
    "jobsToAdd": 50,
    "maxBacklog": 100,
    "subscriptionTier": "STARTER",
    "suggestion": "Your queue has 100 jobs waiting. Wait for some jobs to complete before submitting more, or upgrade your plan for a larger backlog."
  }
}
```
For high-volume use cases, upgrade to a higher tier for larger backlog capacity. The backlog limit is designed to ensure your documents are generated within a reasonable timeframe.
Queue Processing Architecture
Understanding how Rynko processes document generation requests helps you optimize your integration.
```
┌──────────────────────────────────────────────────────────────────────┐
│                       Document Generation Flow                       │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  API Request ──► Backlog Check ──► Slot Check ──► Processing         │
│                       │                │              │              │
│                       ▼                ▼              ▼              │
│                  [Rejected]       [Queue Wait]   [Rendering]         │
│                   HTTP 429         (seconds)       (~500ms)          │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
```
Request Flow:
1. Backlog Check: Is there room in the queue?
   - Yes → Continue to slot check
   - No → Return `429` with `ERR_QUOTA_012`
2. Slot Check: Is a processing slot available?
   - Yes → Start processing immediately
   - No → Add to pending queue, wait for slot
3. Processing: Render the document (~500ms average)
   - Complete → Return download URL
   - Fail → Retry up to 3 times
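If you poll for completion rather than using webhooks, the loop looks roughly like this. Note that the job status route used here is a hypothetical placeholder; consult the API reference for the actual endpoint:

```js
// Poll until the job finishes; the status path below is a hypothetical placeholder.
async function waitForJob(jobId, intervalMs = 2000) {
  for (;;) {
    const response = await fetch(`/api/v1/documents/jobs/${jobId}`, {
      headers: { 'Authorization': `Bearer ${API_KEY}` }
    });
    const job = await response.json();
    if (job.status === 'completed') return job; // carries the download URL
    if (job.status === 'failed') throw new Error(`Job ${jobId} failed`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```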
System Capacity
Rynko is optimized for high-throughput document generation:
| Metric | Value |
|---|---|
| Maximum throughput | ~100 documents/second |
| Average processing time | ~500ms per document |
| Global concurrent limit | 120 jobs system-wide |
The global concurrent limit (120 jobs) is shared across all users. During peak times, your jobs may queue briefly even if your team's concurrent limit isn't reached.
Handling Queue Responses
When submitting a job, you'll receive one of these responses:
Immediate Processing (slot available):
```json
{
  "jobId": "job_abc123",
  "status": "processing",
  "message": "Document generation started"
}
```
Queued (waiting for slot):
```json
{
  "jobId": "job_abc123",
  "status": "queued",
  "position": 45,
  "message": "Job queued for processing"
}
```
Rejected (backlog full):
```json
{
  "statusCode": 429,
  "code": "ERR_QUOTA_012",
  "message": "Job queue backlog limit exceeded. Too many jobs waiting to be processed.",
  "relatedInfo": {
    "currentBacklog": 1000,
    "backlogLimit": 1000,
    "subscriptionTier": "GROWTH",
    "suggestion": "Wait for some jobs to complete before submitting more, or upgrade your plan for a larger backlog."
  }
}
```
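A minimal sketch that branches on all three outcomes (the response shapes are exactly those documented above; `submitJob` is an illustrative name):

```js
async function submitJob(payload) {
  const response = await fetch('/api/v1/documents/generate', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(payload)
  });
  const data = await response.json();

  if (response.status === 429 && data.code === 'ERR_QUOTA_012') {
    // Backlog full: signal the caller to back off and resubmit later.
    return { retry: true, reason: data.relatedInfo.suggestion };
  }
  if (data.status === 'queued') {
    console.log(`Job ${data.jobId} queued at position ${data.position}`);
  }
  return data; // "processing" or "queued"
}
```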
Best Practices for High-Volume Generation
- Use webhooks instead of polling: Subscribe to `document.generated` events
- Implement client-side queuing: Buffer requests when you receive 429 errors
- Use batch endpoint: Use `/api/v1/documents/generate/batch` for multiple documents
- Monitor your queue: Check `position` in queued responses to estimate wait time
```js
// Minimal sleep helper used below.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function generateWithBackoff(payload, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('/api/v1/documents/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.status === 429) {
      const data = await response.json();
      if (data.code === 'ERR_QUOTA_012') {
        // Backlog full - wait longer each attempt (5s, 10s, 20s, capped at 30s)
        const waitTime = Math.min(30000, 5000 * Math.pow(2, attempt));
        console.log(`Backlog full, waiting ${waitTime / 1000}s...`);
        await sleep(waitTime);
        continue;
      }
    }
    return response.json();
  }
  throw new Error('Max retries exceeded - backlog consistently full');
}
```
Batch Size (Rows per Job)
Maximum rows that can be processed in a single batch request:
| Plan | Rows per Job |
|---|---|
| Free | 5 |
| Starter | 50 |
| Growth | 500 |
| Scale | 2,500 |
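If your dataset exceeds your tier's row limit, split it client-side before submitting. A sketch (pass the `rowsPerJob` value for your plan from the table above):

```js
// Split rows into batches that each fit within the plan's per-job row limit.
function chunkRows(rows, rowsPerJob) {
  const batches = [];
  for (let i = 0; i < rows.length; i += rowsPerJob) {
    batches.push(rows.slice(i, i + rowsPerJob));
  }
  return batches;
}

// e.g. on the Growth tier: chunkRows(allRows, 500)
```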
Webhook Limits
Availability by Plan
| Plan | Webhooks Available |
|---|---|
| Free | ❌ |
| Starter | ✅ |
| Growth | ✅ |
| Scale | ✅ |
Delivery Limits
| Limit | Value | Notes |
|---|---|---|
| Timeout | 30 seconds | Per delivery attempt |
| Retries | 3 attempts | Exponential backoff |
| Payload size | 256 KB | Maximum webhook payload |
| Events per second | 100 | Per subscription |
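Your endpoint should acknowledge well inside the 30-second delivery timeout and defer heavy work, or the delivery will be retried. A minimal Node.js sketch (the event payload shape here is illustrative, not a documented schema):

```js
import http from 'node:http';

http.createServer((req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    // Acknowledge immediately so the delivery is not retried.
    res.writeHead(200);
    res.end();
    // Defer processing until after the response is sent.
    setImmediate(() => handleEvent(JSON.parse(body)));
  });
}).listen(3000);

function handleEvent(event) {
  // Hand off to your own queue, database, etc.
  console.log('Webhook event received:', event);
}
```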
OAuth Limits
Token Limits
| Limit | Value |
|---|---|
| Access token lifetime | 1 hour |
| Refresh token lifetime | 30 days |
| Authorization code lifetime | 10 minutes |
| Active access tokens per user | 50 |
| Active refresh tokens per user | 10 |
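Given the one-hour access token lifetime, refresh proactively rather than waiting for a 401. A sketch (the token endpoint path, request body, and response fields below are hypothetical placeholders for your OAuth flow):

```js
let cached = null;

async function getAccessToken() {
  // Refresh one minute before the documented 1-hour expiry.
  if (cached && Date.now() < cached.expiresAt - 60_000) {
    return cached.accessToken;
  }
  // Hypothetical token endpoint; substitute your app's real OAuth token URL.
  const response = await fetch('/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ grant_type: 'refresh_token', refresh_token: REFRESH_TOKEN })
  });
  const data = await response.json();
  cached = {
    accessToken: data.access_token,
    expiresAt: Date.now() + data.expires_in * 1000
  };
  return cached.accessToken;
}
```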
OAuth App Limits
| Plan | OAuth Apps |
|---|---|
| Free | 1 |
| Starter | 5 |
| Growth | 10 |
| Scale | Unlimited |
Storage Limits
Asset Storage
| Plan | Storage |
|---|---|
| Free | 100 MB |
| Starter | 500 MB |
| Growth | 2 GB |
| Scale | 10 GB |
Generated File Retention
| Plan | Retention |
|---|---|
| Free | 3 days |
| Starter | 3 days |
| Growth | 3 days |
| Scale | 3 days |
Requesting Limit Increases
If you need higher limits, you have options:
Upgrade Your Plan
Higher plans come with increased limits. See Pricing.
Enterprise Custom Limits
Scale customers can request custom limits:
- Dedicated infrastructure
- Custom rate limits
- Higher storage quotas
Contact sales@rynko.dev
Temporary Increases
For one-time events (product launches, end-of-month reporting), request a temporary limit increase:
Email: support@rynko.dev
Include:
- Account/Team ID
- Requested limit increase
- Duration needed
- Use case description
Temporary increases are reviewed within 1 business day. Plan ahead for time-sensitive events.
Monitoring Your Usage
Dashboard
View your current usage in the dashboard:
- Overview: Current document/credit usage
- Settings → Usage: Detailed usage breakdown
- Settings → API Keys: Per-key usage statistics
Alerts
Set up usage alerts in Settings → Notifications:
- 50% quota usage
- 80% quota usage
- 100% quota usage
- Rate limit warnings
Best Practices
Manage Concurrent Jobs
- Use batch endpoint: For multiple documents, use `/api/v1/documents/generate/batch` instead of multiple single requests
- Use webhooks: Subscribe to `document.generated` events instead of polling for job status
- Implement backoff: When you hit concurrent limits (429), wait for active jobs to complete before retrying
- Queue on your side: For high-volume workloads, implement a client-side queue to pace requests (see the sketch below)
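A minimal sketch of such a client-side queue, capping in-flight requests at your tier's concurrent job limit (the limit of 5 here assumes the Starter tier):

```js
// Run async tasks with at most `limit` in flight at any moment.
async function runWithConcurrency(tasks, limit = 5) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const index = next++; // safe: no await between read and increment
      results[index] = await tasks[index]();
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

// Usage: wrap each generation call in a thunk.
// await runWithConcurrency(payloads.map((p) => () => generateWithRetry(p)), 5);
```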
Optimize Payload Size
- Compress images: Before including in templates
- Limit data: Only include necessary variables
- Paginate data: Split large datasets across multiple documents
Handle Limits Gracefully
```js
// Minimal sleep helper used below.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function generateWithRetry(payload, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch('/api/v1/documents/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.status === 429) {
      // Could be the concurrent job limit or a rate limit.
      // Honor the Retry-After header if present; otherwise back off exponentially.
      const retryAfter = Number(response.headers.get('Retry-After')) || 2 ** i;
      await sleep(retryAfter * 1000);
      continue;
    }
    return response.json();
  }
  throw new Error('Max retries exceeded');
}
```