# Iterable Import Integration

Iterable is a complete cross-channel customer engagement platform. Use it to message your customers via email, SMS, embedded messages, in-app messages, push notifications, and web push notifications, and to grow your customer base, boost engagement, and increase user lifetime value. The import integration enables TD users to connect to Iterable and retrieve Campaign, List, and Export Data into Treasure Data.

## Prerequisites

- Basic knowledge of Treasure Data
- Iterable API Key

## Limitation

- Because the user list endpoint is rate limited (5 requests per minute), retrieving all lists in a single job is not possible. You must specify a single list ID to import for each job.

## Obtaining an API Key from Iterable

1. Navigate to [https://app.iterable.com/settings/apiKeys](https://app.iterable.com/settings/apiKeys), or [https://app.eu.iterable.com/settings/apiKeys](https://app.eu.iterable.com/settings/apiKeys) if you are using the EU instance of Iterable.
   ![](/assets/screen_shot_2021-01-27_at_5_56_06_am.0f31f9c556a720eb69c3989b6be74cbfd0c4391a7c688645d842b67d4d94b53d.1ada2165.png)
2. Click **New API Key**.
3. Select **Standard (Server-side)**.
4. Use the generated **API Key** to authenticate from Treasure Data.

## Use the TD Console to Create Your Connection

### Create a New Connection

In Treasure Data, you must create and configure the data connection before running your query. As part of the data connection, you provide authentication to access the integration.

1. Open **TD Console**.
2. Navigate to **Integrations Hub** > **Catalog**.
3. Search for and select Iterable.
4. The following dialog opens.
   ![](/assets/screenshot-2025-07-09-at-15.44.03.bb98f8e651076a7f07d444b18e7745d2e9d91385fc91cc20606a5cd24e0ae7b5.1ada2165.png)
5. Enter your API Key and choose your Region.
6. Enter a name for your connection.
7. Select **Continue**.

### Transfer Your Data to Treasure Data

After creating the authenticated connection, you are automatically taken to Authentications. Search for the connection you created.

1. Select **New Source**.
2. Type a name for your **Source** in the Data Transfer field.
   ![](/assets/screen-shot-2021-01-27-at-6.03.00-am.585a020bbd84c481402aa3e7bd8c7814881fa5fd5467c901edf9b2dbbb859689.1ada2165.png)
3. Select **Next**. The Source Table dialog opens.
   ![](/assets/screenshot-2025-07-09-at-15.10.06.59490fa93c240803d1c99ec4aed3c4dc24526af9ddb09b8b02976c67f421568f.1ada2165.png)
   ![](/assets/screenshot-2025-07-09-at-15.10.10.363ffee4d9a1a64539dc32f10ab4475dd7dfcd578e060d95941b8043daf31e07.1ada2165.png)
   ![](/assets/screenshot-2025-07-09-at-15.10.18.47c1023dccee8ddd3f8c5474745ef824725432d367719fdfde741db2abc72e5f.1ada2165.png)
4. Edit the following parameters:

| Parameters | Description |
| --- | --- |
| **Data Type** | Data type to import: Campaign, List, or Export Data. |
| **Export Data Type** | The type of data to export. Must match one of [Iterable's supported data types](https://api.eu.iterable.com/api/docs#export_exportDataCsv). |
| **Campaign id(s)** | A comma-separated list of campaign IDs. Leave blank to import all campaigns. |
| **List id** | The list ID; all users belonging to this list are fetched. |
| **Start Time** | For UI configuration, pick the date and time from the browser picker, or enter the date in the format the browser expects. For example, Chrome provides a calendar to select Year, Month, Day, Hour, and Minute, while on Safari you must type text such as 2020-10-25T00:00. For CLI configuration, use a timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds, such as "2014-10-02T15:01:23Z". |
| **End Time** | Same as **Start Time**: a date-time picker (or browser-formatted text) in the UI, or an RFC3339 UTC "Zulu" timestamp for CLI configuration. |
| **Number of Ids for Each Request** | The number of IDs sent in one request, from 1 to 20. |
| **Incremental** | Import only data that is new since the last run. See About Incremental Loading. |
| **Use Date Range** | Enable to use a preconfigured date range instead of explicit start and end times. |
| **Date Range** | A preconfigured date range: `"Today"`, `"Yesterday"`, `"BeforeToday"`, or `"All"`. Useful for quick exports without specifying actual dates. |
| **Omit Fields (Optional)** | A comma-separated list of field names to **exclude** from the export. If present, all fields except these are returned. |
| **Only Fields (Optional)** | A comma-separated list of field names to **include** in the export. If present, only these fields appear in the results. |
| **Campaign Id (Optional)** | Filters data to a specific campaign when exporting campaign-related events (e.g., `emailSend`). Useful to narrow the scope of data. |
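These parameters map one-to-one onto the CLI configuration keys described later in this document. As a minimal sketch, a list import that honors the single-list-ID limitation noted above might look like the following (the API key and list ID are hypothetical placeholders):

```yaml
in:
  type: iterable
  api_key: xxxxxxxxxxxxxxxx  # hypothetical placeholder key
  region: us
  data_type: list
  list_id: '12345'           # hypothetical; one list ID per job due to the rate limit
  incremental: true
```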
### Data Settings

1. Configure the data settings.
   ![](/assets/screenshot-2025-07-09-at-15.32.53.13f603d9ec4ae4e7cceb6afc45532f561cf864caf987ed7c4c17ffc5a78a95fc.1ada2165.png)

| Parameter | Description |
| --- | --- |
| Retry Limit | The number of retries before the import fails. |
| Initial retry time wait in millis | The initial time, in milliseconds, to wait before retrying. |
| Max retry wait in millis | The maximum time, in milliseconds, to wait before retrying. |
| Schema Settings | The schema guessed from the sample data. You can modify the types and formats before Preview and Run. |

**Notes:**

- Only top-level JSON fields are supported; parsing the key-value pairs inside a JSON object is not supported.
- Remember to change JSON fields from the string data type to the JSON data type to get the data; the guessed data type is normally string (see the sketch after this list).
- Do NOT change a field name unless you need to.
- After you change anything in the custom query, check and edit the schema settings again.
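For example, if a guessed column actually holds a JSON object, you would edit its schema entry (the `columns` list in a CLI configuration) from `string` to `json`. A minimal sketch, assuming a hypothetical field named `user_profile`:

```yaml
columns:
  # guessed by default:
  # - {name: user_profile, type: string}
  # changed so the JSON object is imported as JSON data:
  - {name: user_profile, type: json}
```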
### Data Preview

You can see a [preview](/products/customer-data-platform/integration-hub/batch/import/previewing-your-source-data) of your data before running the import by selecting Generate Preview. Data preview is optional, and you can safely skip to the next page of the dialog if you choose to.

1. Select **Next**. The Data Preview page opens.
2. If you want to preview your data, select **Generate Preview**.
3. Verify the data.

### Data Placement

For data placement, select the target database and table where you want your data placed and indicate how often the import should run.

1. Select **Next**. Under Storage, create a new database or select an existing one, and create a new table or select an existing one, where you want to place the imported data.
2. Select a **Database** > **Select an existing** or **Create New Database**.
3. Optionally, type a database name.
4. Select a **Table** > **Select an existing** or **Create New Table**.
5. Optionally, type a table name.
6. Choose the method for importing the data.
   - **Append** (default): Data import results are appended to the table. If the table does not exist, it is created.
   - **Always Replace**: Replaces the entire content of an existing table with the result output of the query. If the table does not exist, a new table is created.
   - **Replace on New Data**: Replaces the entire content of an existing table with the result output only when there is new data.
7. Select the **Timestamp-based Partition Key** column. If you want to set a partition key seed other than the default key, you can specify a long or timestamp column as the partitioning time. As a default time column, it uses upload_time with the add_time filter.
8. Select the **Timezone** for your data storage.
9. Under **Schedule**, you can choose when and how often you want to run this query.

#### Run Once

1. Select **Off**.
2. Select **Scheduling Timezone**.
3. Select **Create & Run Now**.

#### Repeat Regularly

1. Select **On**.
2. Select the **Schedule**. The UI provides four options: *@hourly*, *@daily*, *@monthly*, or custom *cron*.
3. You can also select **Delay Transfer** and add a delay of execution time.
4. Select **Scheduling Timezone**.
5. Select **Create & Run Now**.

After your transfer has run, you can see the results in **Data Workbench** > **Databases**.

## Use the Command Line to Create Your Connection

Instead of the TD Console, you can use the command line to configure your connection.

### Install the Treasure Data Toolbelt

Install the newest [TD Toolbelt](https://toolbelt.treasuredata.com/).

### Create a Configuration File (seed.yml)

The configuration file includes an `in:` section, where you specify what comes into the connector from Iterable, and an `out:` section, where you specify what the connector puts out to the database in Treasure Data.

```yaml
in:
  type: iterable
  api_key: xxxxxxxxxxxxxxxx
  data_type: export_data
  region: eu
  export_data_type: user
  use_date_range: false
  incremental: true
  start_time: '2024-01-01T08:51:00Z'
  end_time: '2024-12-01T08:51:00Z'
```

**Parameters Reference**

| Name | Description | Value | Default Value | Required |
| --- | --- | --- | --- | --- |
| type | The source of the import. | "iterable" | | Yes |
| api_key | API Key string generated in the Iterable UI for the integration. | String | | Yes |
| region | The region where the api_key is registered. The base URL changes depending on the region. | String: us or eu | "us" | Yes |
| data_type | Data type to import: campaign, list, or export_data. | String | "campaign" | Yes |
| export_data_type | The type of data to export. Must match one of [Iterable's supported data types](https://api.eu.iterable.com/api/docs#export_exportDataCsv). | String | | Yes (if data_type is export_data) |
| campaign_ids | A comma-separated list of campaign IDs. Leave blank to import all campaigns. | String | | No |
| list_id | The list ID; all users belonging to this list are fetched. | String | | No |
| incremental | Enable to use incremental loading. | Boolean | False | No |
| start_time | The beginning timestamp from which to export data. | String. Format: yyyy-MM-dd'T'HH:mm:ss.SS'Z' | | No |
| end_time | The ending timestamp at which to finish exporting data. | String. Format: yyyy-MM-dd'T'HH:mm:ss.SS'Z' | | No |
| number_of_ids_for_each_request | The number of IDs in one request, from 1 to 20. | Integer | 1 | No |
| use_date_range | Enable to use the date range. | Boolean | False | Yes |
| date_range | A preconfigured date range, useful for quick exports without specifying actual dates. | String: today, yesterday, before_today, or all | "Today" | No |
| omit_fields | A comma-separated list of field names to **exclude** from the export. If present, all fields except these are returned. | String | | No |
| only_fields | A comma-separated list of field names to **include** in the export. If present, only these fields appear in the results. | String | | No |
| campaign_id | Filters data to a specific campaign when exporting campaign-related events (e.g., `emailSend`). | String | | No |
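Combining several of these options, a date-range export might look like the following sketch (the API key, campaign ID, and field names are hypothetical placeholders):

```yaml
in:
  type: iterable
  api_key: xxxxxxxxxxxxxxxx        # hypothetical placeholder key
  region: us
  data_type: export_data
  export_data_type: emailSend      # must be one of Iterable's supported export data types
  use_date_range: true
  date_range: yesterday            # today, yesterday, before_today, or all
  campaign_id: '123456'            # hypothetical; narrows the export to one campaign
  only_fields: 'email,campaignId'  # hypothetical field names to include
```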
### Run the Guess Command to Generate the Run Configuration File (load.yml)

To guess the schema of the data, use the `td connector:guess` command. Run guess first to generate the schema, then adjust the `columns` property to match your expectations.

```
$ td connector:guess seed.yml -o load.yml
```

The output will be similar to the following:

```
in:
  type: iterable
  api_key: xxxxxxxxxxxxxxxxxx
  data_type: export_data
  region: eu
  export_data_type: user
  use_date_range: false
  incremental: true
  start_time: '2024-01-01T08:51:00Z'
  end_time: '2024-12-01T08:51:00Z'
  columns:
  - {format: 'yyyy-MM-dd HH:mm:ss xxx', name: signup_date, type: timestamp}
  - {name: itbl_internal.email_domain, type: string}
  - {name: subscribed_message_type_ids, type: string}
  - {name: user_id, type: long}
  - {name: prefer_user_id, type: boolean}
  - {name: signup_source, type: string}
  - {name: user_list_ids, type: string}
  - {name: unsubscribed_message_type_ids, type: string}
  - {name: itbl_user_id, type: string}
  - {name: email, type: string}
  - {name: merge_nested_objects, type: boolean}
  - {format: 'yyyy-MM-dd HH:mm:ss xxx', name: profile_updated_at, type: timestamp}
  - {name: unsubscribed_channel_ids, type: string}
```

### Preview the Data to be Imported (Optional)

You can preview the data to be imported using the `td connector:preview` command.

```
$ td connector:preview load.yml
```

If the system guesses a column name or type unexpectedly, modify `load.yml` directly and preview again. Currently, the Data Connector supports parsing of "boolean", "long", "double", "string", and "timestamp" types.

### Execute the Load Job

Submit the load job. Depending on the data size, it may take a couple of hours. You need to specify the database and table where the data will be stored.

```
$ td connector:issue load.yml --database td_sample_db --table td_sample_table
```

The previous command assumes you have already created a database (*td_sample_db*) and a table (*td_sample_table*). If the database or table does not exist in TD, the command will not succeed. You can create the database and table manually, or add the `--auto-create-table` option to create them automatically:

```
$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at --auto-create-table
```
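To create them manually instead, a sketch using the standard Toolbelt commands:

```
$ td db:create td_sample_db
$ td table:create td_sample_db td_sample_table
```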
### Scheduled Execution

You can schedule periodic Data Connector execution for periodic Iterable imports. We manage our scheduler carefully to ensure high availability. By using this feature, you no longer need a `cron` daemon in your local data center.

### Create the Schedule

Create a new schedule using the `td connector:create` command. In addition to cron-style schedules, the `cron` parameter accepts the shortcuts `@hourly`, `@daily`, and `@monthly`. By default, the schedule is set up in the UTC timezone. You can set the schedule in a different timezone using the `-t` or `--timezone` option. The `--timezone` option only supports extended timezone formats such as "Asia/Tokyo" and "America/Los_Angeles". Timezone abbreviations like PST and CST are *not* supported and may lead to unexpected schedules.

```bash
$ td connector:create \
    daily_iterable_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml
```

The positional arguments are, in order:

- The name of the schedule
- The cron-style schedule
- The database and table where the data will be stored
- The Data Connector configuration file

Specifying the *--time-column* option is also recommended, since Treasure Data's storage is partitioned by time:

```
$ td connector:create \
    daily_iterable_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at
```

### List the Schedules

You can see the list of scheduled entries with the `td connector:list` command.

```bash
$ td connector:list
```

### Show the Setting and History of Schedules

`td connector:show` shows the execution setting of a schedule entry.

```
$ td connector:show daily_iterable_import
Name     : daily_iterable_import
Cron     : 10 0 * * *
Timezone : UTC
Delay    : 0
Database : sample_db
Table    : sample_table
```

`td connector:history` shows the execution history of a schedule entry. To investigate the results of each execution, use `td job:show jobid`.

```
$ td connector:history daily_iterable_import
+--------+---------+---------+-----------+--------------+----------+---------------------------+----------+
| JobID  | Status  | Records | Database  | Table        | Priority | Started                   | Duration |
+--------+---------+---------+-----------+--------------+----------+---------------------------+----------+
| 577914 | success | 20000   | sample_db | sample_table | 0        | 2015-04-16 00:10:03 +0000 | 152      |
| 577872 | success | 20000   | sample_db | sample_table | 0        | 2015-04-15 00:10:04 +0000 | 163      |
| 577810 | success | 20000   | sample_db | sample_table | 0        | 2015-04-14 00:10:04 +0000 | 164      |
| 577766 | success | 20000   | sample_db | sample_table | 0        | 2015-04-13 00:10:04 +0000 | 155      |
| 577710 | success | 20000   | sample_db | sample_table | 0        | 2015-04-12 00:10:05 +0000 | 156      |
| 577610 | success | 20000   | sample_db | sample_table | 0        | 2015-04-11 00:10:04 +0000 | 157      |
+--------+---------+---------+-----------+--------------+----------+---------------------------+----------+
```

### Delete the Schedule

`td connector:delete` removes the schedule.

```
$ td connector:delete daily_iterable_import
```

## References

- API Overview: [https://api.iterable.com/api/docs](https://api.iterable.com/api/docs)
- API Export Data: [https://api.eu.iterable.com/api/docs#export_exportDataJson](https://api.eu.iterable.com/api/docs#export_exportDataJson)