
You can import Media Campaign, Paid Search Campaign, Site Campaign, User Audience Segment Map, Segment Mapping File or Dissent Lists from Salesforce DMP (Krux) into Treasure Data.


This data connector is in Beta. For more information, contact

Prerequisites


  • Basic knowledge of Treasure Data, including the Toolbelt and JavaScript SDK

  • An S3 credential (access key ID and secret access key)

  • Your client name from Salesforce DMP

Integration Overview

This integration has two parts:

  1. Cookie-syncing between Salesforce DMP and Treasure Data CDP: required to create a mapping between the Salesforce DMP ID and the Treasure Data IDs (td_global_id and td_client_id)

  2. Data import from Salesforce DMP into Treasure Data CDP: There are various data feeds that can be brought in. For the purpose of data enrichment, the key file is the mapping between Segment IDs and their names.

Implement a Cookie-Syncing Tag

You must first set up Treasure Data's JavaScript tag as documented in Getting Started with Website Tracking under "Setting up website tracking and install the Treasure Data JavaScript SDK".


The cookie-syncing tag described above does not cover Safari browsers. Safari's Intelligent Tracking Prevention (ITP) feature makes third-party cookie-based visitor identification less reliable. We are actively planning a solution for this.

Use the TD Console to Create Your Connection

Create a New Connection

Go to Integrations Hub > Catalog, then search for and select Salesforce DMP.


Name your new Salesforce DMP Connection. Select Done.

Transfer Your Data to Treasure Data

After creating the authenticated connection, you are automatically taken to the Authentications tab. Look for the connection you created and select New Source. Choose one of the following source types:


  • Segment Mapping File

  • User Audience Segment Map

  • Media Campaign, Paid Search Campaign, Site Campaign or Dissent Lists

Import Segment Mapping File

For the Source, choose Segment Mapping File.

Import User Audience Segment Map

For the Source, choose User Audience Segment Map.


  • Import Date: Import data created from this date.

Import Media Campaign, Paid Search Campaign, Site Campaign, Dissent Lists

For the Source, choose Media Campaign, Paid Search Campaign, Site Campaign or Dissent Lists.


  • Start Date: Import data that has been created since this date.

  • End Date: Import data that has been created up to this date.

  • Incremental Loading: When importing data based on a schedule, the time window of the fetched data automatically shifts forward on each run. For example, if you specify the initial start date as January 1 and end date as January 10, the first run fetches data from January 1 to January 10, the second run fetches from January 11 to January 20, and so on.
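The shifting window described above can be sketched with GNU date arithmetic. This is illustrative only; the connector computes the window internally, and the dates are placeholders matching the example:

```shell
# Sketch: how the incremental window shifts between scheduled runs
# (requires GNU date; the connector does this internally).
start="2024-01-01"
end="2024-01-10"
# Window length in days, inclusive of both endpoints.
days=$(( ( $(date -d "$end" +%s) - $(date -d "$start" +%s) ) / 86400 + 1 ))
# The next run starts the day after the previous end date...
next_start=$(date -d "$end + 1 day" +%F)
# ...and covers a window of the same length.
next_end=$(date -d "$next_start + $((days - 1)) days" +%F)
echo "$next_start to $next_end"
```

With a first window of January 1 to January 10, this prints the second run's window, January 11 to January 20.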


You’ll see a preview of your data. To make changes, select Advanced Settings; otherwise, select Next.

Advanced Settings

You can specify the following parameters:

  • Maximum retry times. Specifies the maximum number of retries for each API call.

    Code Block
      Type: number
      Default: 7

  • Initial retry interval (milliseconds). Specifies the wait time before the first retry.

    Code Block
      Type: number
      Default: 1000

  • Maximum retry interval (milliseconds). Specifies the maximum wait time between retries.

    Code Block
      Type: number
      Default: 120000
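If you configure the connector from the command line instead, these same settings would live in load.yml. A sketch follows; the option names retry_limit, initial_retry_interval_millis, and maximum_retry_interval_millis are assumptions based on other TD data connectors, not confirmed for krux_dmp:

```yaml
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: smf
  retry_limit: 7                         # assumed name: maximum retry times
  initial_retry_interval_millis: 1000    # assumed name: wait before first retry
  maximum_retry_interval_millis: 120000  # assumed name: max wait between retries
out:
  mode: append
```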

Choose the Target Database and Table

Choose existing ones or create a new database and table.


If you want to set a different partition key seed rather than use the default key, you can specify one using the popup menu.


In the When tab, you can specify one-time transfer, or schedule an automated recurring transfer.


  • Once now: run a one-time transfer.

  • Repeat…

    • Schedule: accepts @hourly, @daily, @monthly, or a custom cron expression.

    • Delay Transfer: add a delay to the execution time.

  • TimeZone: supports extended timezone formats such as ‘Asia/Tokyo’.


Name your Transfer and select Done to start.


After your transfer has run, you can see the results of your transfer in the Databases tab.

Use the Command Line to create your Salesforce DMP connection

Alternatively, you can use the command line to configure your connection.

Install the Treasure Data Toolbelt

Install the newest TD Toolbelt.

Create a Configuration File (load.yml)

The configuration file includes an in: section where you specify what comes into the connector from Salesforce DMP and an out: section where you specify what the connector puts out to the database in Treasure Data. For more details on available out modes, see the Appendix.


Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: smf
out:
  mode: append

Preview the Data to be Imported (Optional)

You can preview data to be imported using the command td connector:preview.

Code Block
$ td connector:preview load.yml 

Execute the Load Job

You use td connector:issue to execute the job.


From the command line, submit the load job. Processing might take a couple of hours depending on the data size.
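For example, the load job can be submitted as follows. The database and table names are placeholders, and --time-column is optional:

```shell
$ td connector:issue load.yml \
    --database td_sample_db \
    --table td_sample_table \
    --time-column created_at
```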

Scheduled execution

You can schedule periodic data connector execution for recurring Media Campaign, Paid Search Campaign, and Site Campaign imports. We configure our scheduler carefully to ensure high availability. By using this feature, you no longer need to run a cron daemon in your own data center.


See How Incremental Loading works for details and examples.

Create the schedule

A new schedule can be created using the td connector:create command. The name of the schedule, a cron-style schedule, the database and table where the data will be stored, and the data connector configuration file are required.


Code Block
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at

List the Schedules

You can see the list of scheduled entries by entering the command td connector:list.

Code Block
$ td connector:list
| Name         | Cron       | Timezone | Delay | Database     | Table           | Config                                     |
| daily_import | 10 0 * * * | UTC      | 0     | td_sample_db | td_sample_table | {"in"=>{"type"=>"krux_dmp",  |

Show the Schedule Settings and History of Schedules

td connector:show displays the execution settings of a schedule entry, and td connector:history displays its execution history.


Code Block
% td connector:history daily_import
| JobID  | Status  | Records | Database     | Table           | Priority | Started                   | Duration |
| 578066 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-28 00:10:05 +0000 | 160      |
| 577968 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-27 00:10:07 +0000 | 161      |
| 577914 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-26 00:10:03 +0000 | 152      |
| 577872 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-25 00:10:04 +0000 | 163      |
| 577810 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-24 00:10:04 +0000 | 164      |
| 577766 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-23 00:10:04 +0000 | 155      |
| 577710 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-22 00:10:05 +0000 | 156      |
| 577610 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-21 00:10:04 +0000 | 157      |
8 rows in set

Delete the Schedule

td connector:delete removes the schedule.

Code Block
$ td connector:delete daily_import


Modes for the out plugin

You can specify the import mode in the out: section of the load.yml file. Supported modes are append (add newly imported records to the table) and replace (replace the table contents with the imported records).


Code Block
  mode: replace

How Incremental Loading works

Incremental loading tracks the date of the most recently imported files and, on each scheduled run, loads only the records from files created or updated after the previous execution.
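A scheduled, incremental configuration might look like the following sketch. The target value and the start_date, end_date, and incremental option names are assumptions; check the connector's reference for the exact names:

```yaml
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: media_campaign   # assumed feed name
  start_date: "2024-01-01" # assumed option name
  end_date: "2024-01-10"   # assumed option name
  incremental: true        # assumed option name
out:
  mode: append
```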