Iterable is a cross-channel marketing platform that allows you to create, optimize, and measure every interaction across the entire customer journey. Treasure Data supports the Iterable platform by allowing users to add or remove subscribers from an Iterable list.

Prerequisites

  • Basic knowledge of Treasure Data

  • Basic knowledge of Iterable

Obtaining an API Key

  1. Navigate to https://app.iterable.com/settings/apiKeys.
  2. Select New API Key.
  3. Select Standard (Server-side).


Use the TD Console to Create Your Connection

Create a New Connection

In Treasure Data, you must create and configure the data connection prior to running your query. As part of the data connection, you provide authentication to access the integration.

1. Open TD Console.
2. Navigate to Integrations Hub > Catalog.
3. Search for and select Iterable.

4. Select Create Authentication.
5. Type the credentials to authenticate.
6. Type a name for your connection.
7. Select Done.


Define your Query

For adding subscribers, the connector supports only these fields:

Name                   Type      Description
email                  String
data_fields            JSON
user_id                String
prefer_user_id         Boolean   Prefer user_id over email
merge_nested_objects   Boolean
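
The add action consumes whichever of these fields your query outputs; only the output column names matter. As a sketch (the table and source column names here are hypothetical, so adjust them to your schema), a query populating every supported field might look like:

```sql
-- Hypothetical source table "subscribers"; alias columns to the supported field names.
SELECT
  mail_address AS email,                 -- subscriber email address
  customer_id  AS user_id,               -- Iterable user ID
  profile_json AS data_fields,           -- JSON string of profile attributes
  true         AS prefer_user_id,        -- match on user_id rather than email
  false        AS merge_nested_objects   -- replace rather than merge nested JSON objects
FROM subscribers
```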


For removing subscribers, the connector supports only these fields:

Name      Type      Description
email     String
user_id   String

Example Query

SELECT email, user_id
FROM my_table

Use Query Export Result to export data

  1. Complete the instructions in Creating a Destination Integration.
  2. Navigate to Data Workbench > Queries.

  3. Select a query for which you would like to export data.

  4. Run the query to validate the result set.

  5. Select Export Results.

  6. Select an existing integration authentication.
  7. Define any additional Export Results details, reviewing the integration parameters for your export integration. Your Export Results screen might differ, or you might not have additional details to fill out.
  8. Select Done.

  9. Run your query.

  10. Validate that your data moved to the destination you specified.


Integration Parameters for Iterable

Set the following parameters:


Parameter             Values    Description
action                String    add or remove
list_id               Number
campaign_id           Number
channel_unsubscribe   Boolean
skip_invalid_data     Boolean
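
When you configure the export in a workflow rather than the console, these parameters map onto `result_settings` keys. A sketch, assuming the parameter names carry over one-to-one (the list and campaign IDs below are placeholders):

```yaml
result_settings:
  type: iterable
  action: add                 # "add" or "remove"
  list_id: 123456             # placeholder Iterable list ID
  campaign_id: 654321         # placeholder campaign ID
  channel_unsubscribe: false
  skip_invalid_data: true     # skip records that fail validation instead of failing the job
```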

Use Audience Studio to schedule and view your activation

  1. Open TD Console.
  2. Navigate to Audience Studio > Segments & Funnels.
  3. Select the Segment for activation, right-click and select Create Activation.
  4. Enter the activation name and select the created Iterable authentication.
  5. Set the integration parameters for Iterable.
  6. Navigate to the Output Mapping tab and uncheck Export All Columns to define the column mapping from your segment schema to the export query. See the supported fields in the previous section, Define your Query.
  7. Navigate to the Schedule tab to schedule the activation (optional).
  8. Select Create or Create & Run Now to create or run the activation.
  9. You can view the activation status and logs in Workflows by selecting the pop-out icon beside the last attempted activation on the Activations tab.

  10. From the Workflow Logs tab, you can view output details such as "total", "skipped", "succeed", or "fail" record counts, any data found invalid during validation, and any errors returned by the API.


Optionally Schedule the Query Export Jobs

You can use Scheduled Jobs with Result Export to periodically write the output result to a target destination that you specify.


1. Navigate to Data Workbench > Queries.
2. Create a new query or select an existing query.
3. Next to Schedule, select None.

4. In the drop-down, select one of the following schedule options.

Drop-down Value     Description
Custom cron...      Review Custom cron... details.
@daily (midnight)   Run once a day at midnight (00:00 am) in the specified time zone.
@hourly (:00)       Run every hour at 00 minutes.
None                No schedule.

Custom cron... Details

Cron Value   Description
0 * * * *    Run once an hour.
0 0 * * *    Run once a day at midnight.
0 0 1 * *    Run once a month at midnight on the morning of the first day of the month.
""           Create a job that has no scheduled run time.

 *    *    *    *    *
 -    -    -    -    -
 |    |    |    |    |
 |    |    |    |    +----- day of week (0 - 6) (Sunday=0)
 |    |    |    +---------- month (1 - 12)
 |    |    +--------------- day of month (1 - 31)
 |    +-------------------- hour (0 - 23)
 +------------------------- min (0 - 59)

The following named entries can be used:

  • Day of Week: sun, mon, tue, wed, thu, fri, sat

  • Month: jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec

A single space is required between each field. The values for each field can be composed of:

  • A single value, within the limits displayed above for each field.

  • A wildcard ‘*’ to indicate no restriction based on the field. For example, ‘0 0 1 * *’ configures the schedule to run at midnight (00:00) on the first day of each month.

  • A range ‘2-5’, indicating the range of accepted values for the field. For example, ‘0 0 1-10 * *’ configures the schedule to run at midnight (00:00) on each of the first 10 days of each month.

  • A list of comma-separated values ‘2,3,4,5’, indicating the list of accepted values for the field. For example, ‘0 0 1,11,21 * *’ configures the schedule to run at midnight (00:00) on the 1st, 11th, and 21st day of each month.

  • A periodicity indicator ‘*/5’ to express how often, based on the field’s valid range of values, the schedule is allowed to run. For example, ‘30 */2 1 * *’ configures the schedule to run on the 1st of every month, every 2 hours starting at 00:30, and ‘0 0 */5 * *’ configures the schedule to run at midnight (00:00) every 5 days starting on the 5th of each month.

  • A comma-separated list of any of the above except the ‘*’ wildcard, such as ‘2,*/5,8-10’. For example, ‘0 0 5,*/10,25 * *’ configures the schedule to run at midnight (00:00) on the 5th, 10th, 20th, and 25th day of each month.
5. (Optional) Enable Delay execution to delay the start time of the query.

Execute the Query

Save the query with a name and run, or just run the query. Upon successful completion of the query, the query result is automatically imported to the specified container destination.


Scheduled jobs that continuously fail due to configuration errors may be disabled on the system side after several notifications.



Optionally Configure Export Results in Workflow

Within Treasure Workflow, you can specify the use of this data connector to export data.

Learn more about Using Workflows to Export Data with the TD Toolbelt.

Example Workflow for Iterable


_export:
  td:
    database: td.database

+iterable_export_task:
  td>: export_iterable.sql
  database: ${td.database}
  result_connection: iterable
  result_settings:
    type: iterable
    action: add
    list_id: 769207
    skip_invalid_data: false
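
A remove action can be configured the same way. A hedged variant, assuming a hypothetical query file remove_iterable.sql that selects only email and user_id:

```yaml
+iterable_remove_task:
  td>: remove_iterable.sql          # hypothetical query returning email, user_id
  database: ${td.database}
  result_connection: iterable
  result_settings:
    type: iterable
    action: remove
    list_id: 769207
```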

















