Limits & Quotas

This page documents all rate limits, payload limits, and quotas in Rynko. We believe in transparency: you should know your limits before you hit them.

API Rate Limits

Rate limits protect the platform and ensure fair usage. Limits are applied per API key.

note

Document generation endpoints (/api/v1/documents/generate and /api/v1/documents/generate/batch) do not have HTTP rate limits. Instead, they are protected by concurrent job limits and monthly quotas. This allows you to queue multiple jobs rapidly while the system manages processing capacity per your plan tier.

Template Endpoints

| Endpoint | Method | Limit | Window |
| --- | --- | --- | --- |
| /api/v1/templates | GET | 300 requests | per minute |
| /api/v1/templates | POST | 30 requests | per minute |
| /api/v1/templates/:id | PATCH | 30 requests | per minute |
| /api/v1/templates/:id | DELETE | 30 requests | per minute |

Webhook Subscription Endpoints

| Endpoint | Method | Limit | Window |
| --- | --- | --- | --- |
| /api/v1/webhook-subscriptions | GET/POST/PATCH/DELETE | 30 requests | per minute |
| /api/v1/webhook-subscriptions/:id/test | POST | 10 requests | per minute |

Rate Limit Headers

Every API response includes rate limit headers:

X-RateLimit-Limit: 30
X-RateLimit-Remaining: 25
X-RateLimit-Reset: 1732723200
| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the window |
| X-RateLimit-Remaining | Remaining requests in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
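A client can read these headers to pace its own requests. A minimal sketch (the helper name `parseRateLimits` is ours, not part of the API):

```javascript
// Extract the rate-limit headers from a fetch() Response (or any object
// with a get(name) accessor) into a plain object.
function parseRateLimits(headers) {
  return {
    limit: Number(headers.get('X-RateLimit-Limit')),
    remaining: Number(headers.get('X-RateLimit-Remaining')),
    // The reset header is a Unix timestamp in seconds; Date wants ms
    resetAt: new Date(Number(headers.get('X-RateLimit-Reset')) * 1000)
  };
}

// Example using the header values shown above
const exampleHeaders = new Map([
  ['X-RateLimit-Limit', '30'],
  ['X-RateLimit-Remaining', '25'],
  ['X-RateLimit-Reset', '1732723200']
]);
const limits = parseRateLimits(exampleHeaders);
// limits.remaining → 25
```

When `remaining` approaches zero, delay further calls until `resetAt`.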

Rate Limit Exceeded Response

When you exceed the rate limit:

{
  "statusCode": 429,
  "code": "ERR_QUOTA_003",
  "message": "Rate limit exceeded. Please retry after 45 seconds.",
  "retryAfter": 45
}

Best Practice: Implement exponential backoff when you receive a 429 response.
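One way to implement this is to honor the `retryAfter` hint from the 429 body when present, and fall back to exponential delays otherwise. A sketch (the helper name and the 60-second cap are our choices):

```javascript
// Compute how long to wait (in ms) before the next retry attempt.
// Prefers the server's retryAfter hint (in seconds); otherwise uses
// exponential backoff: 1s, 2s, 4s, ... capped at 60s.
function backoffDelayMs(attempt, retryAfterSeconds) {
  if (retryAfterSeconds) {
    return retryAfterSeconds * 1000;
  }
  return Math.min(60000, 1000 * 2 ** attempt);
}

// With the example 429 body above: backoffDelayMs(0, 45) → 45000
// Without a hint, attempt 3:       backoffDelayMs(3)     → 8000
```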

Payload Limits

Request Limits

| Limit | Value | Notes |
| --- | --- | --- |
| Request body size | 10 MB | JSON payload maximum |
| URL length | 8 KB | Including query parameters |
| Header size | 16 KB | Total header size |

Document Generation Request Limits

| Limit | Value | Notes |
| --- | --- | --- |
| Variables size | 1 MB | JSON variables per request |
| Batch size | 100 | Documents per batch request |
| Variables per document | 100 KB | In batch requests |

Document Generation Limits

Rynko enforces limits on document generation to protect system resources and ensure reliable performance.

PDF Limits

| Limit | Value | Why |
| --- | --- | --- |
| Pages per PDF | 30 | Protects CPU/compute costs |
| Images per PDF | 50 | Protects memory usage |
| Output file size | 10 MB | Storage and download limits |
| Page size | A4 / Letter | Standard sizes |
| Generation timeout | 60 seconds | Per document |

Excel Limits

| Limit | Value | Why |
| --- | --- | --- |
| Rows per workbook | 10,000 | Protects memory/RAM |
| Columns per sheet | 50 | Protects file size |
| Sheets per workbook | 10 | Maximum sheets |
| Output file size | 10 MB | Storage and download limits |
| Cell content | 32,767 chars | Excel standard limit |
| Generation timeout | 120 seconds | Per workbook |

Template Processing Limits

| Limit | Value | Why |
| --- | --- | --- |
| Loop iterations | 1,000 | Prevents hang-ups from large arrays |
| Nesting depth | 10 levels | Prevents infinite recursion |
| Components per template | 500 | Protects rendering performance |

Limit Violation Errors

When you exceed document limits, Rynko immediately stops processing and returns a 400 Bad Request with a specific error code:

| Error Code | Limit | Example Message |
| --- | --- | --- |
| ERR_LIMIT_001 | PDF pages | "PDF generation exceeded 30 page limit. Your PDF has 45 pages." |
| ERR_LIMIT_002 | PDF images | "PDF contains 75 images, exceeding the 50 image limit." |
| ERR_LIMIT_003 | Excel rows | "Excel generation exceeded 10,000 row limit. Your data has 15,000 rows." |
| ERR_LIMIT_004 | Excel sheets | "Excel workbook exceeded 10 sheet limit. Your workbook has 12 sheets." |
| ERR_LIMIT_005 | Excel columns | "Excel sheet exceeded 50 column limit. Your sheet has 75 columns." |
| ERR_LIMIT_006 | File size | "Generated document (12MB) exceeds maximum file size of 10MB." |
| ERR_LIMIT_007 | Loop iterations | "Loop 'items' has 2,500 items but maximum is 1,000. Consider pagination or splitting data." |
| ERR_LIMIT_008 | Nesting depth | "Template exceeded 10 level nesting depth. Current depth: 12 levels." |
| ERR_LIMIT_009 | Component count | "Template exceeded 500 component limit. Your template has 650 components." |

Example Error Response

{
  "statusCode": 400,
  "code": "ERR_LIMIT_001",
  "message": "PDF generation exceeded maximum page limit",
  "timestamp": "2025-12-09T10:30:00.000Z",
  "path": "/api/v1/documents/generate",
  "relatedInfo": {
    "actualPages": 45,
    "maxPages": 30,
    "message": "PDF generation exceeded 30 page limit. Your PDF has 45 pages."
  }
}

How to Handle Limit Errors

  1. Split large datasets: Break data into multiple documents
  2. Paginate loops: Use pagination for arrays over 1,000 items
  3. Optimize images: Compress images before including in PDFs
  4. Simplify templates: Reduce nesting and component count
note

Pro tip: Validate your data size before calling the API. Check array lengths and estimate row counts client-side to avoid limit errors.
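A sketch of such a client-side check, using the limits from the tables above (the function name and input shape are ours):

```javascript
// Check a generation request against the documented limits before
// calling the API. Returns a list of human-readable violations;
// an empty list means the request should pass the limit checks.
function preflightCheck({ loopArrays = {}, excelRows = 0, excelSheets = 0 }) {
  const problems = [];
  for (const [name, arr] of Object.entries(loopArrays)) {
    if (arr.length > 1000) {
      problems.push(`Loop '${name}' has ${arr.length} items (max 1,000)`);
    }
  }
  if (excelRows > 10000) {
    problems.push(`${excelRows} rows exceeds the 10,000 row limit`);
  }
  if (excelSheets > 10) {
    problems.push(`${excelSheets} sheets exceeds the 10 sheet limit`);
  }
  return problems;
}

// Example: an oversized loop array is caught before the API call
const issues = preflightCheck({
  loopArrays: { items: new Array(2500) },
  excelRows: 9000
});
// issues.length → 1
```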

Template Limits

Content Limits

| Limit | Value |
| --- | --- |
| Variables per template | 500 |
| Components per template | 200 |
| Nested loop depth | 3 levels |
| Conditional nesting | 5 levels |
| Formula complexity | 1,000 operations |
| Template name | 255 characters |
| Template slug | 100 characters |

Template Counts by Plan

| Plan | Templates |
| --- | --- |
| Free | 2 |
| Starter | 5 |
| Growth | 20 |
| Scale | 50 |

Account Quotas

Document Quotas by Plan

| Plan | Documents/Month | Additional Documents |
| --- | --- | --- |
| Free | 50 | Purchase credit packs |
| Starter | 500 | Purchase credit packs |
| Growth | 2,500 | Purchase credit packs |
| Scale | 10,000 | Purchase credit packs |

Credit Packs

Purchase additional documents beyond your monthly quota:

| Pack | Documents | Price | Per Document |
| --- | --- | --- | --- |
| Micro | 100 | $10 | $0.10 |
| Small | 500 | $40 | $0.08 |
| Medium | 1,000 | $70 | $0.07 |
| Large | 2,500 | $150 | $0.06 |
note

Credits never expire. They are consumed after your monthly quota is exhausted.

Team Limits

| Plan | Team Members |
| --- | --- |
| Free | 1 |
| Starter | 3 |
| Growth | 10 |
| Scale | Unlimited |

API Key Limits

| Plan | API Keys |
| --- | --- |
| Free | 1 |
| Starter | 3 |
| Growth | 10 |
| Scale | Unlimited |

Concurrent Jobs

Maximum number of documents that can be actively processing simultaneously per team. This controls how many workers are busy for your team at any moment.

| Plan | Concurrent Jobs |
| --- | --- |
| Free | 1 |
| Starter | 5 |
| Growth | 10 |
| Scale | 25 |

Backlog Limit

Maximum number of jobs that can be waiting in queue for processing. This prevents queue overflow and ensures reasonable wait times.

| Plan | Backlog Limit |
| --- | --- |
| Free | 10 |
| Starter | 100 |
| Growth | 1,000 |
| Scale | 5,000 |

How the two limits work together:

Think of it like a restaurant:

  • Concurrent Jobs = Number of chefs cooking (determines speed)
  • Backlog = Size of the waiting area (determines capacity)
  Your Request → [Backlog Queue] → [Processing Workers] → Done
                    (waiting)          (cooking)

  Starter tier:    100 max waiting     5 max processing

Example flow for Starter tier:

  1. You submit 50 documents via batch API
  2. 5 jobs start processing immediately (concurrent limit)
  3. 45 jobs wait in the backlog queue
  4. As each job completes (~5-15 seconds), the next queued job starts
  5. All 50 documents complete in roughly 2-3 minutes
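The same arithmetic gives a rough completion-time estimate: jobs run in waves of your concurrent limit, each wave taking roughly one average job duration. A sketch, not an SLA (function name and the 10s default are ours):

```javascript
// Estimate total seconds for a batch: jobs run in "waves" of
// `concurrent` at a time, each wave taking roughly `avgSeconds`.
function estimateBatchSeconds(jobCount, concurrent, avgSeconds = 10) {
  return Math.ceil(jobCount / concurrent) * avgSeconds;
}

// Starter tier: 50 documents / 5 concurrent slots = 10 waves.
// At 5-15s per job that spans roughly 50-150 seconds, consistent
// with the "2-3 minutes" figure above once queueing overhead is added.
```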

Why backlog limits matter:

Without a backlog limit, a user could queue millions of jobs that would take days to process, resulting in:

  • Stale/outdated documents by the time they're ready
  • Poor experience for other users sharing the system
  • Wasted resources on jobs that are no longer needed

Error when backlog limit exceeded:

{
  "statusCode": 429,
  "code": "ERR_QUOTA_012",
  "message": "Job queue backlog limit exceeded",
  "relatedInfo": {
    "queuedJobs": 100,
    "jobsToAdd": 50,
    "maxBacklog": 100,
    "subscriptionTier": "STARTER",
    "suggestion": "Your queue has 100 jobs waiting. Wait for some jobs to complete before submitting more, or upgrade your plan for a larger backlog."
  }
}
tip

For high-volume use cases, upgrade to a higher tier for larger backlog capacity. The backlog limit is designed to ensure your documents are generated within a reasonable timeframe.

Queue Processing Architecture

Understanding how Rynko processes document generation requests helps you optimize your integration.

Document Generation Flow:

  API Request ──► Backlog Check ──► Slot Check ──► Processing
                       │                │              │
                       ▼                ▼              ▼
                  [Rejected]       [Queue Wait]    [Rendering]
                   HTTP 429         (seconds)       (~500ms)

Request Flow:

  1. Backlog Check: Is there room in the queue?

    • Yes → Continue to slot check
    • No → Return 429 with ERR_QUOTA_012
  2. Slot Check: Is a processing slot available?

    • Yes → Start processing immediately
    • No → Add to pending queue, wait for slot
  3. Processing: Render the document (~500ms average)

    • Complete → Return download URL
    • Fail → Retry up to 3 times

System Capacity

Rynko is optimized for high-throughput document generation:

| Metric | Value |
| --- | --- |
| Maximum throughput | ~100 documents/second |
| Average processing time | ~500ms per document |
| Global concurrent limit | 120 jobs system-wide |
note

The global concurrent limit (120 jobs) is shared across all users. During peak times, your jobs may queue briefly even if your team's concurrent limit isn't reached.

Handling Queue Responses

When submitting a job, you'll receive one of these responses:

Immediate Processing (slot available):

{
  "jobId": "job_abc123",
  "status": "processing",
  "message": "Document generation started"
}

Queued (waiting for slot):

{
  "jobId": "job_abc123",
  "status": "queued",
  "position": 45,
  "message": "Job queued for processing"
}

Rejected (backlog full):

{
  "statusCode": 429,
  "code": "ERR_QUOTA_012",
  "message": "Job queue backlog limit exceeded. Too many jobs waiting to be processed.",
  "relatedInfo": {
    "currentBacklog": 1000,
    "backlogLimit": 1000,
    "subscriptionTier": "GROWTH",
    "suggestion": "Wait for some jobs to complete before submitting more, or upgrade your plan for a larger backlog."
  }
}
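The three cases above can be dispatched on the status code and body. A minimal sketch (the function and the action names it returns are ours):

```javascript
// Map a generation API response to a client-side action.
function classifyJobResponse(statusCode, body) {
  if (statusCode === 429 && body.code === 'ERR_QUOTA_012') {
    return 'wait-and-retry';  // backlog full: back off, then resubmit
  }
  if (body.status === 'processing' || body.status === 'queued') {
    return 'await-webhook';   // job accepted: wait for document.generated
  }
  return 'inspect-error';     // anything else needs a closer look
}

// classifyJobResponse(429, { code: 'ERR_QUOTA_012' }) → 'wait-and-retry'
```

Both accepted cases lead to the same place (wait for the webhook); only the rejection requires client-side backoff.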

Best Practices for High-Volume Generation

  1. Use webhooks instead of polling: Subscribe to document.generated events
  2. Implement client-side queuing: Buffer requests when you receive 429 errors
  3. Use batch endpoint: /api/v1/documents/generate/batch for multiple documents
  4. Monitor your queue: Check position in queued responses to estimate wait time
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function generateWithBackoff(payload, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('/api/v1/documents/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    // Read the body once; re-reading a consumed response body throws
    const data = await response.json();

    if (response.status === 429 && data.code === 'ERR_QUOTA_012') {
      // Backlog full - wait with exponential backoff, capped at 30s
      const waitTime = Math.min(30000, 5000 * Math.pow(2, attempt));
      console.log(`Backlog full, waiting ${waitTime / 1000}s...`);
      await sleep(waitTime);
      continue;
    }

    return data;
  }
  throw new Error('Max retries exceeded - backlog consistently full');
}

Batch Size (Rows per Job)

Maximum rows that can be processed in a single batch request:

| Plan | Rows per Job |
| --- | --- |
| Free | 5 |
| Starter | 50 |
| Growth | 500 |
| Scale | 2,500 |

Webhook Limits

Availability by Plan

| Plan | Webhooks Available |
| --- | --- |
| Free | |
| Starter | |
| Growth | |
| Scale | |

Delivery Limits

| Limit | Value | Notes |
| --- | --- | --- |
| Timeout | 30 seconds | Per delivery attempt |
| Retries | 3 attempts | Exponential backoff |
| Payload size | 256 KB | Maximum webhook payload |
| Events per second | 100 | Per subscription |

OAuth Limits

Token Limits

| Limit | Value |
| --- | --- |
| Access token lifetime | 1 hour |
| Refresh token lifetime | 30 days |
| Authorization code lifetime | 10 minutes |
| Active access tokens per user | 50 |
| Active refresh tokens per user | 10 |
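Given these lifetimes, clients should refresh access tokens proactively rather than waiting for a 401. A sketch of a client-side expiry check, using the 1-hour lifetime above (the function name and safety margin are ours):

```javascript
// Decide whether an access token should be refreshed now.
// Defaults: 1 hour lifetime (per the table above), 60s safety margin
// so the token is refreshed before it actually expires.
function shouldRefresh(issuedAtMs, nowMs, lifetimeMs = 3600000, marginMs = 60000) {
  return nowMs >= issuedAtMs + lifetimeMs - marginMs;
}

// Token issued 59.5 minutes ago → inside the 60s margin, refresh now
// Token issued 30 minutes ago   → still fresh, keep using it
```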

OAuth App Limits

| Plan | OAuth Apps |
| --- | --- |
| Free | 1 |
| Starter | 5 |
| Growth | 10 |
| Scale | Unlimited |

Storage Limits

Asset Storage

| Plan | Storage |
| --- | --- |
| Free | 100 MB |
| Starter | 500 MB |
| Growth | 2 GB |
| Scale | 10 GB |

Generated File Retention

| Plan | Retention |
| --- | --- |
| Free | 3 days |
| Starter | 3 days |
| Growth | 3 days |
| Scale | 3 days |

Requesting Limit Increases

If you need higher limits, you have options:

Upgrade Your Plan

Higher plans come with increased limits. See Pricing.

Enterprise Custom Limits

Scale customers can request custom limits:

  • Dedicated infrastructure
  • Custom rate limits
  • Higher storage quotas

Contact sales@rynko.dev

Temporary Increases

For one-time events (product launches, end-of-month reporting), request a temporary limit increase:

Email: support@rynko.dev

Include:

  • Account/Team ID
  • Requested limit increase
  • Duration needed
  • Use case description
info

Temporary increases are reviewed within 1 business day. Plan ahead for time-sensitive events.

Monitoring Your Usage

Dashboard

View your current usage in the dashboard:

  • Overview: Current document/credit usage
  • Settings → Usage: Detailed usage breakdown
  • Settings → API Keys: Per-key usage statistics

Alerts

Set up usage alerts in Settings → Notifications:

  • 50% quota usage
  • 80% quota usage
  • 100% quota usage
  • Rate limit warnings

Best Practices

Manage Concurrent Jobs

  1. Use batch endpoint: For multiple documents, use /api/v1/documents/generate/batch instead of multiple single requests
  2. Use webhooks: Subscribe to document.generated events instead of polling for job status
  3. Implement backoff: When you hit concurrent limits (429), wait for active jobs to complete before retrying
  4. Queue on your side: For high-volume workloads, implement a client-side queue to pace requests

Optimize Payload Size

  1. Compress images: Before including in templates
  2. Limit data: Only include necessary variables
  3. Paginate data: Split large datasets across multiple documents
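Splitting and paginating can share one small helper that chunks rows to your plan's rows-per-job limit. A sketch (the function name is ours):

```javascript
// Split an array of rows into chunks no larger than `size`,
// e.g. a plan's rows-per-job limit from the Batch Size table.
function chunkRows(rows, size) {
  const chunks = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

// 120 rows on Starter (50 rows per job) → 3 batch requests
const batches = chunkRows(new Array(120).fill({}), 50);
// batches.length → 3
```

Each chunk then becomes one call to /api/v1/documents/generate/batch.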

Handle Limits Gracefully

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function generateWithRetry(payload, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch('/api/v1/documents/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.status === 429) {
      // Could be the concurrent job limit or a rate limit.
      // Honor Retry-After when present; otherwise back off exponentially.
      const retryAfter = Number(response.headers.get('Retry-After')) || 2 ** i;
      await sleep(retryAfter * 1000);
      continue;
    }

    return response.json();
  }
  throw new Error('Max retries exceeded');
}