
You can import Media Campaign, Paid Search Campaign, Site Campaign, User Audience Segment Map, Segment Mapping File, or Dissent Lists from Salesforce DMP (Krux) into Arm Treasure Data.

This data connector is currently in Beta. For more information, contact support@treasuredata.com.


Prerequisites

  • Basic knowledge of Treasure Data, including the Toolbelt and JavaScript SDK

  • S3 credentials with an access key ID and secret access key

  • Client name from Salesforce DMP

Integration Overview

This integration has two parts:

  1. Cookie-syncing between Salesforce DMP and Treasure Data CDP: required to create a mapping between the Salesforce DMP ID and the Treasure Data IDs td_global_id and td_client_id

  2. Data import from Salesforce DMP into Treasure Data CDP: There are various data feeds that can be brought in. For the purpose of data enrichment, the key file is the mapping between Segment IDs and their names.



Implement a Cookie-Syncing Tag

You must first set up Treasure Data's JavaScript tag as documented in Getting Started with Website Tracking under "Setting up website tracking and install the Treasure Data JavaScript SDK".
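
If you have not already initialized the SDK, a typical td-js-sdk setup looks like the following sketch. The SDK version in the URL, the write key, and the database name are placeholders that you must replace with your own values; see the td-js-sdk documentation for the exact snippet.

Code Block
<!-- Load the Treasure Data JavaScript SDK and create a client. -->
<script src="https://cdn.treasuredata.com/sdk/2.2/td.min.js"></script>
<script>
  // Placeholder values: use your own write-only API key and database name.
  var td = new Treasure({
    host: 'in.treasuredata.com',
    writeKey: 'your_write_only_api_key',
    database: 'your_database'
  });
</script>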

Next, add the following code to the pages of your website where Salesforce DMP's tag is already installed.

Code Block
(function(window, document, td){

  // Collect the values the Krux tag has stored in localStorage.
  var kruxProperties = {};
  for (var k in window.localStorage) {
    if (k.startsWith('<YOUR KRUX PREFIX HERE>')) {
      kruxProperties[k] = window.localStorage.getItem(k);
    }
  }
  // Record the Krux ID / TD ID mapping in a Treasure Data table.
  td.trackEvent('<TD TABLE NAME FOR TRACKING KRUX ID/TD ID map>', kruxProperties);

  // On success, fire the Salesforce DMP user-match pixel with the TD global ID.
  var successCb = function(tdGlobalId) {
    var el = document.createElement('img');
    el.src = '//beacon.krxd.net/usermatch.gif?partner=treasuredata&partner_uid=' + tdGlobalId;
    el.width = 1;
    el.height = 1;
    el.style.display = 'none';
    document.body.appendChild(el);
  };

  function isSafari() {
    var ua = window.navigator.userAgent.toLowerCase();
    return ua.indexOf('safari') !== -1 && ua.indexOf('chrome') === -1 && ua.indexOf('edge') === -1;
  }

  if (isSafari()) {
    // Safari is intentionally skipped; see the ITP note below.
  } else {
    td.fetchGlobalID(successCb, function(err) { console.log(err); });
  }

})(window, document, td);


In the preceding sample, replace <YOUR KRUX PREFIX HERE> with the localStorage key prefix that your Krux tag uses (Krux keys typically begin with kx), and replace the table-name placeholder with the Treasure Data table that should hold the ID map. The sample does not include cookie-syncing for Safari browsers: Safari's Intelligent Tracking Prevention (ITP) makes third-party cookie-based visitor identification unreliable, and we are actively planning a solution.


Use the TD Console to Create Your Connection

Create a New Connection

Go to Integrations Hub > Catalog, then search for and select Salesforce DMP.


Click Create. You are creating an authenticated connection.

The following dialog opens.



Enter the client name, access key ID, and secret access key that you retrieved from Salesforce DMP.

Click Continue.


Name your new Salesforce DMP connection, then click Done.

Transfer Your Data to Treasure Data

After creating the authenticated connection, you are automatically taken to the Authentications tab. Find the connection you created and click New Source.

Specify the data that you want to import:

  • Segment Mapping File

  • User Audience Segment Map

  • Media Campaign, Paid Search Campaign, Site Campaign, or Dissent Lists

Import Segment Mapping File

For the Source, choose Segment Mapping File.



Import User Audience Segment Map

For the Source, choose User Audience Segment Map.



Parameters:

  • Import Date: Import data created from this date.

Import Media Campaign, Paid Search Campaign, Site Campaign, Dissent Lists

For the Source, choose Media Campaign, Paid Search Campaign, Site Campaign, or Dissent Lists.


Parameters:

  • Start Date: Import data that has been created since this date.

  • End Date: Import data that has been created up to this date.

  • Incremental Loading: When importing data based on a schedule, the time window of the fetched data automatically shifts forward on each run. For example, if you specify the initial start date as January 1 and end date as January 10, the first run fetches data from January 1 to January 10, the second run fetches from January 11 to January 20, and so on.
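
    For example, a scheduled import configured with that January window would behave like the following sketch; it reuses the krux_dmp configuration keys shown in the command-line section later on this page:

    Code Block
    in:
      type: krux_dmp
      target: mc
      start_date: 2019-01-01   # first run fetches January 1 - January 10
      end_date: 2019-01-10
      incremental: true        # second run automatically fetches January 11 - January 20
    out:
      mode: append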

Preview


You'll see a preview of your data. To make changes, click Advanced Settings; otherwise, click Next.



Advanced Settings


You can specify the following parameters:

  • Maximum retry times. Specifies the maximum number of retries for each API call.

    Code Block
      Type: number
      Default: 7
    


  • Initial retry interval (milliseconds). Specifies the wait time before the first retry.

    Code Block
      Type: number
      Default: 1000
    


  • Maximum retry interval (milliseconds). Specifies the maximum wait time between retries.

    Code Block
      Type: number
      Default: 120000
    


Choose the Target Database and Table

Choose an existing database and table, or create a new database and table.



Create a new database and give it a name, then complete similar steps for Create new table.


If you want to set a different partition key seed rather than use the default key, you can specify one using the popup menu.

Scheduling

In the When tab, you can specify a one-time transfer, or schedule an automated recurring transfer.


  • Once now: set a one-time job.

  • Repeat…

    • Schedule: accepts @hourly, @daily, @monthly, or a custom cron expression.

    • Delay Transfer: add a delay to the execution time.

  • TimeZone: supports extended timezone formats like ‘Asia/Tokyo’.



Details

Name your transfer and click Done to start.


After your transfer has run, you can see the results of your transfer in the Databases tab.
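
As a quick sanity check, you can also count the imported rows from the command line. The database and table names below are the same examples used elsewhere on this page:

Code Block
$ td query -w -d td_sample_db \
  "SELECT COUNT(1) FROM td_sample_table"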

Use the Command Line to Create Your Salesforce DMP Connection

As an alternative to the TD Console, you can use the TD Toolbelt from the command line to configure your connection.

Install the Treasure Data Toolbelt

Install the newest TD Toolbelt.

Create a Configuration File (load.yml)

The configuration file includes an in: section where you specify what comes into the connector from Salesforce DMP and an out: section where you specify what the connector puts out to the database in Treasure Data. For more details on available out modes, see the Appendix.

The following example shows how to import Media Campaign data, without incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: mc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: false
out:
  mode: append

The following example shows how to import Media Campaign data, with incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: mc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: true
out:
  mode: append

The following example shows how to import Paid Search Campaign data, without incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: psc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: false
out:
  mode: append

The following example shows how to import Paid Search Campaign data, with incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: psc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: true
out:
  mode: append

The following example shows how to import Site Campaign data, without incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: sc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: false
out:
  mode: append

The following example shows how to import Site Campaign data, with incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: sc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: true
out:
  mode: append

The following example shows how to import Dissent Lists, without incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: dl
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: false
out:
  mode: append

The following example shows how to import Dissent Lists, with incremental scheduling.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: dl
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: true
out:
  mode: append

The following example shows how to import the User Audience Segment Map.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: uasm
  import_date: 2019-01-17
out:
  mode: append

The following example shows how to import the Segment Mapping File.

Code Block
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: smf
out:
  mode: append

Preview the Data to be Imported (Optional)

You can preview data to be imported using the command td connector:preview.

Code Block
$ td connector:preview load.yml 

Execute the Load Job

Use the td connector:issue command to execute the load job.

You must specify the database and table where you want to store the data before you execute the load job, for example, td_sample_db and td_sample_table.

Code Block
$ td connector:issue load.yml \
    --database td_sample_db \
    --table td_sample_table \
    --time-column date_time_column

We recommend specifying the --time-column option, because Treasure Data's storage is partitioned by time. If the option is not given, the data connector selects the first long or timestamp column as the partitioning time. The column specified by --time-column must be of long or timestamp type (use the preview results to check the available column names and types; most targets include a last_modified_date column).

If your data doesn't have a time column, you can add one using the add_time filter option. For details, see the add_time filter plugin documentation.
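
As a minimal sketch, an add_time filter entry in load.yml could look like the following, which stamps each record with the job's upload time; see the add_time filter plugin documentation for all supported modes:

Code Block
in:
  ...
filters:
  - type: add_time
    to_column:
      name: time           # name of the time column to create
      type: timestamp
    from_value:
      mode: upload_time    # use the upload time of the job as the value
out:
  mode: append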

The td connector:issue command assumes that the database (td_sample_db) and table (td_sample_table) already exist. If the database or the table does not exist in TD, td connector:issue will fail. Create the database and table manually, or use --auto-create-table with td connector:issue to create them automatically.

Code Block
$ td connector:issue load.yml \
    --database td_sample_db \
    --table td_sample_table \
    --time-column date_time_column \
    --auto-create-table

From the command line, submit the load job. Processing might take a couple of hours depending on the data size.

Scheduled Running of the Integration

You can schedule periodic data connector execution for recurring Media Campaign, Paid Search Campaign, and Site Campaign imports. We configure our scheduler carefully to ensure high availability. By using this feature, you no longer need a cron daemon on your local data center.

The following configuration keys in the in: section are relevant for scheduled, incremental runs:

  • incremental: controls the load mode, which governs how the data connector fetches data from Salesforce DMP based on one of the native timestamp fields associated with each object.

  • columns: defines a custom schema for the data imported into Treasure Data. You can list only the columns you are interested in, but make sure they exist in the object you are fetching; otherwise, those columns are not available in the result (see the sketch after this list).

  • last_record: holds the last record from the previous load job. It requires an object with the column name as the key and that column's value as the value. The key must match a Salesforce DMP data column name.
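
As an illustration only, these keys could be combined in load.yml as in the following sketch. The column names and the last_record value are hypothetical placeholders, not fields confirmed for the krux_dmp connector; check the schema of your own target before using them:

Code Block
in:
  type: krux_dmp
  # credentials omitted for brevity
  target: smf
  incremental: true
  columns:                                # hypothetical custom schema
    - {name: segment_id, type: string}
    - {name: segment_name, type: string}
  last_record:
    segment_id: "12345"                   # hypothetical: key = column name, value = last value seen
out:
  mode: append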

See How Incremental Loading works for details and examples.

Create the Schedule

You can create a new schedule using the td connector:create command. The command requires the name of the schedule, a cron-style schedule, the database and table where the data will be stored, and the data connector configuration file.

The `cron` parameter accepts `@hourly`, `@daily`, `@monthly`, or a custom cron expression such as `10 0 * * *`.

By default, the schedule is set up in the UTC timezone. You can set the schedule in a different timezone using the -t or --timezone option. The `--timezone` option only supports extended timezone formats like 'Asia/Tokyo' and 'America/Los_Angeles'. Timezone abbreviations like PST and CST are *not* supported and may lead to unexpected schedules.


Code Block
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml

We also recommend specifying the --time-column option, because Treasure Data's storage is partitioned by time.

Code Block
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at

List the Schedules

You can see the list of scheduled entries by entering the command td connector:list.

Code Block
$ td connector:list
+--------------+------------+----------+-------+--------------+-----------------+--------------------------------------------+
| Name         | Cron       | Timezone | Delay | Database     | Table           | Config                                     |
+--------------+------------+----------+-------+--------------+-----------------+--------------------------------------------+
| daily_import | 10 0 * * * | UTC      | 0     | td_sample_db | td_sample_table | {"in"=>{"type"=>"krux_dmp",  |
+--------------+------------+----------+-------+--------------+-----------------+--------------------------------------------+

Show the Schedule Settings and History of Schedules

td connector:show shows the execution setting of a schedule entry.

Code Block
% td connector:show daily_import
Name     : daily_import
Cron     : 10 0 * * *
Timezone : UTC
Delay    : 0
Database : td_sample_db
Table    : td_sample_table
Config
---
in:
  type: krux_dmp
  access_key_id: xxxxxxxxxxx
  secret_access_key: xxxxxxxxxxx
  client_name: xxxxxxxxxxx
  target: mc
  start_date: 2019-01-17
  end_date: 2019-01-27
  incremental: true

td connector:history shows the execution history of a schedule entry. To investigate the results of each individual execution, use td job <jobid>.

Code Block
% td connector:history daily_import
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| JobID  | Status  | Records | Database     | Table           | Priority | Started                   | Duration |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| 578066 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-28 00:10:05 +0000 | 160      |
| 577968 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-27 00:10:07 +0000 | 161      |
| 577914 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-26 00:10:03 +0000 | 152      |
| 577872 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-25 00:10:04 +0000 | 163      |
| 577810 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-24 00:10:04 +0000 | 164      |
| 577766 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-23 00:10:04 +0000 | 155      |
| 577710 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-22 00:10:05 +0000 | 156      |
| 577610 | success | 10000   | td_sample_db | td_sample_table | 0        | 2019-03-21 00:10:04 +0000 | 157      |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
8 rows in set

Delete the Schedule

td connector:delete removes the schedule.

Code Block
$ td connector:delete daily_import

Appendix

Modes for the out plugin

You can specify file import mode in the out section of the load.yml file.

The out: section controls how data is imported into a Treasure Data table.
For example, you may choose to append data or replace data in an existing table in Treasure Data.

Output modes are ways to modify the data as the data is placed in Treasure Data.

  • Append (default): Records are appended to the target table.

  • Replace (available in td 0.11.10 and later): Replaces data in the target table. Any manual schema changes made to the target table remain intact.

Examples:

Code Block
in:
  ...
out:
  mode: append

Code Block
in:
  ...
out:
  mode: replace




How Incremental Loading works

Incremental loading uses the last modified date of the files so that each run loads records monotonically, picking up only files inserted or updated after the most recent execution.

At the first execution, this connector loads all files that match the Filename Regex and were modified after the Modified After value. If incremental: true is set, the latest modified DateTime is saved as the new Modified After value.

Example:

  • Import folder contains files:

    Code Block
    +--------------+--------------------------+
    |   Filename   |     Last update          |
    +--------------+--------------------------+
    | File0001.csv | 2019-05-04T10:00:00.123Z |
    | File0011.csv | 2019-05-05T10:00:00.123Z |
    | File0012.csv | 2019-05-06T10:00:00.123Z |
    | File0013.csv | 2019-05-07T10:00:00.123Z |
    | File0014.csv | 2019-05-08T10:00:00.123Z |
    +--------------+--------------------------+


  • Filename Regex: File001.*.csv

  • Modified After: 2019-05-01T10:00:00.00Z

Then File0011.csv, File0012.csv, File0013.csv, and File0014.csv are imported: they match the Filename Regex, and each has a last update later than 2019-05-01T10:00:00.00Z. (File0001.csv is excluded because it does not match the File001.*.csv pattern.)


At the next execution, only files with a last update later than 2019-05-08T10:00:00.123Z are imported.


  • Import folder has newly updated and added files:

    Code Block
    +--------------+--------------------------+
    |   Filename   |     Last update          |
    +--------------+--------------------------+
    | File0001.csv | 2019-05-04T10:00:00.123Z |
    | File0011.csv | 2019-05-05T10:00:00.123Z |
    | File0012.csv | 2019-05-06T10:00:00.123Z |
    | File0013.csv | 2019-05-09T10:00:00.123Z |
    | File0014.csv | 2019-05-08T10:00:00.123Z |
    | File0015.csv | 2019-05-09T10:00:00.123Z |
    +--------------+--------------------------+


  • Filename Regex: File001.*.csv

  • Modified After: 2019-05-08T10:00:00.123Z

Then only File0013.csv and File0015.csv are imported: both match the Filename Regex and have a last update later than 2019-05-08T10:00:00.123Z.