# Real-Time 2.0 FAQ

## Where is the real-time layer? How do I know my data is actually there?

Every event processed by Real-Time 2.0 is written to two separate storage destinations, each updating on its own timescale:

| Storage Layer | What It Stores | Update Speed | How to Verify |
|  --- | --- | --- | --- |
| **Real-Time Layer** | In-session behavioral data, real-time attributes, stitched identity links | Milliseconds after event ingestion | Use the Personalization API or check Realtime Storage via the Parent Segment configuration page |
| **Batch Layer** | Full event history, aggregated attributes, predictive scores, complete customer profiles | Minutes to hours, on workflow schedule | Query event tables in Data Workbench |


To verify that real-time data is being received, you can use **Data Workbench** to query the underlying event tables directly and confirm that raw events are landing in your Treasure Data storage.
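As a starting point, the query below is a minimal sketch of such a verification check. The table name `my_events` is a placeholder for your own event table; `time` is Treasure Data's standard unix-epoch event column, and `TD_INTERVAL` is the Treasure Data UDF for relative time windows.

```python
# Illustrative only: build a Data Workbench verification query that
# counts events ingested in a recent lookback window. Table name and
# window are placeholders; adapt to your own schema.

def verification_query(table: str, lookback: str = "-10m") -> str:
    """Return a query counting events landed within the lookback window."""
    return (
        f"SELECT COUNT(1) AS recent_events "
        f"FROM {table} "
        f"WHERE TD_INTERVAL(time, '{lookback}')"
    )

print(verification_query("my_events"))
```

A nonzero `recent_events` count confirms that raw events are landing in storage.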

## How does unification work? Where does ID stitching happen?

Profile unification in Real-Time 2.0 is a two-step process:

**Step 1 – ID Stitching (Realtime Decision Engine)**

When an event arrives, the Realtime Decision Engine immediately attempts to link the current user's identifiers (anonymous cookie ID, device ID, email hash, or other configured keys) to an existing customer profile. This happens in milliseconds and produces a unified identity that persists across devices and sessions.
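The engine's internal implementation is not public, but the linking behavior can be sketched in a few lines. The sketch below is a simplified in-memory model: any identifier already seen resolves to its existing profile, and the event's remaining identifiers are linked to that same profile. (A production engine must also handle merging two previously separate profiles, which this sketch omits.)

```python
# Simplified, illustrative model of ID stitching; not the actual
# Realtime Decision Engine implementation.
import itertools

class IdStitcher:
    def __init__(self):
        self._profile_of = {}              # identifier -> profile ID
        self._next_id = itertools.count(1)

    def stitch(self, identifiers: list[str]) -> int:
        """Resolve a set of identifiers from one event to a single profile ID."""
        known = [self._profile_of[i] for i in identifiers if i in self._profile_of]
        profile = known[0] if known else next(self._next_id)
        for i in identifiers:              # link all identifiers to the profile
            self._profile_of[i] = profile
        return profile

stitcher = IdStitcher()
p1 = stitcher.stitch(["cookie:abc"])                # anonymous visit
p2 = stitcher.stitch(["cookie:abc", "email:1f3a"])  # login links the email
p3 = stitcher.stitch(["email:1f3a", "device:xyz"])  # new device, same person
print(p1 == p2 == p3)   # → True
```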

**Step 2 – Profile Unification (Unify Realtime and Batch Data)**

After identity is resolved, the engine merges the live in-session data with the customer's historical profile from Batch Storage. This combined profile—containing both real-time attributes and historical metrics—is what powers personalization and activation decisions.
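Conceptually, the merge can be pictured as overlaying live attributes onto the historical profile. In the sketch below, the field names and the precedence rule (real-time values win for overlapping keys) are assumptions for illustration, not documented engine behavior.

```python
# Illustrative merge of in-session data with the historical batch profile.
# Field names and precedence are assumptions, not the documented behavior.

def merge_profile(batch: dict, realtime: dict) -> dict:
    merged = dict(batch)
    merged.update(realtime)   # assumed: real-time attributes take precedence
    return merged

batch_profile = {"loyalty_tier": "gold", "lifetime_value": 1240.0,
                 "predicted_churn": 0.12}
realtime_attrs = {"current_page": "/products/shoes", "session_events": 7}

profile = merge_profile(batch_profile, realtime_attrs)
print(profile["loyalty_tier"], profile["current_page"])
```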

ID stitching keys and the attributes used for matching are configured in your **Parent Segment** under the ID Stitching settings. Changes to stitching keys take effect for new events immediately after the configuration is deployed.

## What's the difference between real-time and batch processing?

Real-Time 2.0 runs two processing pipelines that serve different purposes and complement each other:

|  | Realtime Pipeline | Batch Pipeline |
|  --- | --- | --- |
| **Trigger** | Every event, as it arrives | Scheduled (minutes to hours) |
| **Speed** | Milliseconds | Minutes to hours |
| **Data processed** | Individual events, in-session behavior | Full event history, large aggregations |
| **Outputs** | Updated real-time attributes, personalization decisions, triggered activations | Aggregated attributes, predictive scores, audience segments |
| **Best for** | Responding to what a customer is doing right now | Understanding who a customer is based on their full history |


**Combined decisioning example:** When a customer views a product page, the Realtime Pipeline detects the behavior instantly and triggers a personalized recommendation. That recommendation is enriched with the customer's purchase history, loyalty tier, and predicted next best offer—all supplied by the Batch Pipeline—resulting in a decision that is both timely and contextually accurate.
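A toy decision rule makes the division of labor concrete: the real-time event supplies the trigger and context, while the batch profile supplies history. The rule itself is invented for illustration.

```python
# Illustrative decision rule combining a real-time event with
# batch-computed profile attributes. The rule is invented.

def recommend(event: dict, profile: dict) -> str:
    if event.get("type") != "product_view":
        return "no_action"
    # Batch-supplied attributes tailor the real-time response.
    if profile.get("loyalty_tier") == "gold":
        return f"vip_offer:{event['product_id']}"
    return f"related_products:{event['product_id']}"

event = {"type": "product_view", "product_id": "sku-123"}
profile = {"loyalty_tier": "gold", "next_best_offer": "free_shipping"}
print(recommend(event, profile))   # → vip_offer:sku-123
```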

You do not need to choose between the two pipelines. Every event flows through both automatically.

## What's the latency? How fast is this really?

**End-to-end latency SLAs**

| Capability | Latency | Notes |
|  --- | --- | --- |
| **Personalization API** | ≤ 100ms (p95) | Time from API request to response, including real-time attribute lookup and batch profile merge |
| **Triggered Activations** | Up to 3 minutes | Time from event ingestion to activation delivery to downstream channel |
| **Batch Processing** | Minutes (varies by workflow schedule) | Depends on workflow configuration and data volume |
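When monitoring against the Personalization API target, note that the SLA is expressed as a 95th percentile, not an average. The sketch below shows one common way to check observed latencies against the 100 ms target using the nearest-rank percentile method; the sample numbers are made up.

```python
# Illustrative p95 SLA check using the nearest-rank percentile method.
# Latency samples are made up.
import math

def p95(samples_ms: list[float]) -> float:
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))   # nearest-rank method
    return ordered[rank - 1]

latencies = [40, 55, 60, 62, 70, 71, 75, 80, 82, 85,
             88, 90, 91, 92, 93, 94, 95, 96, 97, 140]
print(p95(latencies) <= 100)   # → True (p95 is 97 ms despite the 140 ms outlier)
```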


**Throughput capacity**

| Component | Default Capacity | Maximum (global) |
|  --- | --- | --- |
| **Event Ingestion** | 2,000 events/second | 100,000+ events/second |
| **Real-time Decisioning** | 8,000 events/second | — |
| **Triggered Activations** | 8,000 events/second | — |


If your expected event volume exceeds the default ingestion limit of 2,000 events/second, contact your Treasure Data account team to discuss capacity adjustments. Limits are configurable and can be scaled to support high-traffic use cases.
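A back-of-the-envelope check like the one below can help decide when to start that conversation. The 20% headroom factor is an arbitrary safety margin for illustration, not a Treasure Data recommendation.

```python
# Capacity headroom check against the default ingestion limit.
# The headroom factor is an arbitrary safety margin for illustration.

DEFAULT_INGESTION_LIMIT = 2_000   # events/second

def needs_capacity_increase(peak_eps: float, headroom: float = 0.2) -> bool:
    """True if peak traffic plus a safety margin exceeds the default limit."""
    return peak_eps * (1 + headroom) > DEFAULT_INGESTION_LIMIT

print(needs_capacity_increase(1_500))   # → False (1,800 eps fits the limit)
print(needs_capacity_increase(1_800))   # → True  (2,160 eps exceeds the limit)
```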