
Some Kit API endpoints are backed by a distributed data system that syncs asynchronously. This means that after you write data — creating a subscriber, updating a field, applying a tag — a subsequent read may not immediately reflect that change. This is called eventual consistency, and it’s an intentional property of how our infrastructure is built. By separating our read and write systems, we’re able to support richer queries, advanced filters, and better performance at scale.

What to expect

You may see temporary differences such as:
  • A newly created subscriber not appearing in a list right away.
  • A list total count that is slightly off after a write.
  • Filter results (by tag, segment, engagement, etc.) that don’t yet include a just-written change.
In all cases, these differences resolve automatically once the system catches up.
Eventual consistency applies to list and reporting endpoints — for example, listing subscribers or filtering by tags. Direct lookups by ID (e.g., GET /v4/subscribers/:id) are not affected and return strongly consistent results.

How long is the delay?

Under normal conditions:
Percentile      Propagation delay
P50 (median)    ~30 seconds
P99 (tail)      Up to 5 minutes
For most requests, data will be consistent within about 30 seconds. In rare cases — typically under higher load — it may take up to 5 minutes before reads converge to the latest written state. For context, this is well within the range of delays common across the industry. Stripe’s reporting API, for example, can reflect updates with up to a 3-hour delay.

Which endpoints are affected?

Eventual consistency primarily affects endpoints that query or aggregate across subscribers:
  • GET /v4/subscribers — listing and filtering subscribers
  • Endpoints that return subscriber counts or totals
  • Endpoints that filter by tags, segments, custom fields, or engagement data
Endpoints that write data and return the resource in the response (e.g., POST /v4/subscribers, PUT /v4/subscribers/:id) are not affected — the response you receive from a write reflects the committed state of that resource.

How to build around eventual consistency

Don’t rely on read-after-write for confirmation

If you write subscriber data and immediately query a list to confirm it’s there, you may see stale or incomplete results. This is especially common in bulk import workflows, where job processing time compounds the propagation delay. Trust the write response — a successful 201 Created or 202 Accepted means the data has been saved or queued. You don’t need to verify by reading the list right away.
# ✅ Accepted: trust that the bulk job will complete
POST /v4/bulk/subscribers
 202 Accepted

# ⚠️ Avoid querying lists immediately after a bulk write
GET /v4/subscribers?created_after=2026-01-01T00:00:00Z
 May return incomplete results while the job processes and data syncs (~30s–5min)

Use the ID returned in write responses

Write responses return the full resource object, including the id. Use that ID directly for subsequent operations rather than re-querying a list to look it up.
# Capture the ID from the write response
POST /v4/subscribers
 201 Created { "subscriber": { "id": "abc123" } }

# Use it directly for subsequent operations
POST /v4/tags/:tag_id/subscribers { "subscriber_id": "abc123" }
GET  /v4/subscribers/abc123

Retry reads with backoff

If you need to confirm a resource is visible in a list, retry the read using exponential backoff rather than polling aggressively:
  • Start with a short delay (250–500ms).
  • Increase the delay on each retry.
  • Cap the maximum delay (e.g., 5 seconds).
  • Stop after a reasonable timeout (30–60 seconds) and surface a helpful error to the user.
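The retry loop above can be sketched as a small helper. This is a minimal sketch, not an official client: `fetch` stands in for your list read (e.g. `GET /v4/subscribers`) and `is_visible` is your own predicate on its result — both are caller-supplied assumptions.

```python
import random
import time

def read_with_backoff(fetch, is_visible, base_delay=0.25, max_delay=5.0, timeout=60.0):
    """Retry `fetch` with exponential backoff until `is_visible(result)` is
    truthy or `timeout` seconds elapse. Raises TimeoutError on expiry."""
    deadline = time.monotonic() + timeout
    delay = base_delay
    while True:
        result = fetch()
        if is_visible(result):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("resource not visible within timeout")
        # Add a little jitter so concurrent clients don't retry in lockstep.
        time.sleep(min(delay, max_delay) * (1 + random.random() * 0.25))
        delay = min(delay * 2, max_delay)  # exponential growth, capped
```

The jitter is optional but avoids synchronized retry bursts when many workers wait on the same write.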

Bulk workflows

Bulk endpoints process asynchronously in the background, adding processing time on top of the propagation delay. See Bulk & async processing for details on polling job status before reading back results.
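A polling loop for that pattern might look like the sketch below. `get_job_status` is a stand-in for whatever job-status call your client exposes (see Bulk & async processing); its name and the `"processing"` / `"completed"` / `"failed"` return values are assumptions here, not the documented API shape.

```python
import time

def wait_for_bulk_job(get_job_status, interval=2.0, timeout=600.0):
    """Poll a bulk job until it completes, before reading results back.

    `get_job_status` returns "processing", "completed", or "failed"
    (assumed values for this sketch)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_job_status()
        if status == "completed":
            return True
        if status == "failed":
            raise RuntimeError("bulk job failed")
        time.sleep(interval)
    raise TimeoutError("bulk job did not finish within the timeout")
```

Only after the job reports completion should you start reading lists back, and even then the ~30s propagation window still applies.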

Two-way sync integrations

If your app syncs subscriber data bidirectionally — writing to Kit and reading back to confirm state — the propagation delay can cause your sync to appear stale for a short window. We recommend:
  • Storing the id and known state locally at write time rather than re-fetching to verify.
  • Waiting at least 30–60 seconds before reading back, or using exponential backoff if you must poll.
  • Treating a successful write response as the source of truth, not the immediate read-back result.
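The first and third recommendations can be combined into one write path, sketched below. `create_subscriber` is a hypothetical stand-in for your `POST /v4/subscribers` call, and `local_state` is your app's own store; the response shape follows the write examples above.

```python
import time

def sync_to_kit(create_subscriber, local_state, email, fields):
    """Write to Kit and record the committed state locally at write time,
    instead of re-fetching a list to verify."""
    resp = create_subscriber(email=email, fields=fields)
    sub_id = resp["subscriber"]["id"]
    # The write response is the source of truth for this sync cycle.
    local_state[sub_id] = {
        "email": email,
        "fields": fields,
        "synced_at": time.time(),
    }
    return sub_id
```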

Design your UI for a “syncing” state

If your app has a UI, acknowledge the reality of eventual consistency to your users:
  • Show a “Syncing…” or “Saving…” state after writes.
  • Offer a manual “Refresh” action so users can check for updated data.
  • Avoid flows that make hard decisions based on list results immediately after a write.

Troubleshooting

If you suspect eventual consistency is affecting your integration:
  • Confirm whether the workflow is read-after-write (a create or update followed by an immediate list or count read).
  • Add logging around write timestamps and subsequent read attempts to measure the actual delta.
  • Implement retries with backoff for reads that return missing or stale data.
  • If the issue persists beyond a 5-minute window, contact support with request IDs and timestamps — that’s outside the expected propagation range and may indicate a separate issue.
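The second and fourth bullets can be wired together with a small measurement helper, sketched here with caller-supplied stand-ins: `write` performs the write and `poll` does a list read, returning True once the change is visible.

```python
import time

def measure_propagation(write, poll, interval=1.0, limit=300.0):
    """Measure the delta between a write and the first read that reflects it.

    Returns the delay in seconds, or None if the change never became
    visible within `limit` seconds (the 5-minute expected window)."""
    start = time.monotonic()
    write()
    while time.monotonic() - start < limit:
        if poll():
            return time.monotonic() - start
        time.sleep(interval)
    return None  # beyond the expected window; worth a support ticket
```

Logging the returned delta over time gives you concrete numbers to compare against the P50/P99 figures above, and a `None` result is the signal to contact support with request IDs and timestamps.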

FAQ

Does this affect all endpoints?
No. Write endpoints and direct ID lookups are strongly consistent. Eventual consistency applies to list, filter, and reporting endpoints that aggregate across subscribers.

Can I force a fresh read?
Not currently. Design your integration to tolerate short delays using the ID-based patterns, retries with backoff, and user-facing syncing states described above.

Why does this happen at all?
Our subscriber list and reporting data is backed by a separate analytics system that offers richer query capabilities than our transactional database. The trade-off is a short sync delay. The two systems converge reliably — unlike some caching layers that can return stale data indefinitely.
Have questions about how eventual consistency affects a specific use case? Reach out at support@kit.com or join the conversation in the Kit developer community.