Maximize your data resources by using Treasure Data with Google DoubleClick for Publishers (DFP).

Google DoubleClick for Publishers is now known as Google Ad Manager.

You can create audience lists in your Google DoubleClick for Publishers (DFP) account using data held in Treasure Data. Follow these steps to move cookies and Mobile Advertising Identifiers to new or existing audience lists within Google DFP.


Prerequisites

  • Basic knowledge of Treasure Data, including the TD Toolbelt

  • A DFP Account 

  • Authorized Treasure Data DMP access to your Google DFP Account

Limitations from Google Ad Manager via Audience Partner API

  • It may take up to 24 hours for updates to audience lists to become visible in DFP. Expect to wait up to 24 hours from query completion for changes to be reflected in DFP.

  • Google Data Platform Policy (Identifying Users and Obtaining User Consent) requires that each segment identifies at least 100 users.
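
To satisfy the 100-user minimum before export, you can filter out undersized segments in the query itself. A minimal Presto sketch, assuming a hypothetical source table audience_source with cookie and list_name columns:

 SELECT a.cookie, a.list_name
 FROM audience_source a
 JOIN (
   SELECT list_name
   FROM audience_source
   GROUP BY list_name
   HAVING COUNT(DISTINCT cookie) >= 100   -- Google's per-segment minimum
 ) b ON a.list_name = b.list_name

Segments that fall below the threshold are simply excluded from the export rather than being rejected downstream.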

Grant Access for Treasure Data

Treasure Data’s data connector requires permissions to create audience segments in your Google DFP account. Use the Google Contact Us form to reach the DoubleClick for Publishers Support team and request that Treasure Data be granted access to your DFP account. Provide the following information in the form:

  • Request: Grant Treasure Data permissions

  • Your DoubleClick for Publishers account ID (referred to by Google as the Audience Link ID)

  • Treasure Data DMP:

    • Customer ID: 140-996-0635

    • NID: treasuredata_dmp

To find your Audience Link ID:

  1. Open Google Ad Manager.

  2. Select Admin > Global settings > All network settings.

  3. Find the Audience link ID.

You are sending this information so that Google recognizes Treasure Data and connects your Google DFP account to Treasure Data.

To export data, you create or select an existing connection, create or reuse a query, and then run the query to export your audience lists.

Use the TD Console to Create Your Connection

Create a New Connection

In Treasure Data, you must create and configure the data connection prior to running your query. As part of the data connection, you provide authentication to access the integration.

1. Open TD Console.
2. Navigate to Integrations Hub > Catalog.
3. Search for and select Google Ad Manager via Audience Partner API.

4. Select Create Authentication.
5. Type the credentials to authenticate.
6. In the Audience Link ID field, enter the ID that you use in your DFP.

7. Type a name for your connection.
8. Select Done.

Define your Query

Sometimes you need to define the column mapping before writing the query.

Plan to transfer your data at least 24 hours ahead of when you need the audience lists (also referred to as segments) to be available in Google Ad Manager (DFP).

Column Mappings

The Google DFP (via DDP) integration reads the data source table column by column and uses the following column-name mappings to process each row:

  • cookie: The encrypted Google ID or Mobile Advertising Identifier that DDP uses for ID matching. This column contains the cookie hash or mobile identifier of your users.

  • list_name: The name of the audience list (segment) that you want to create in your DFP audience. If the list name does not exist in DFP, a new list is created. If the list name exists, the existing list is updated.

  • timestamp (optional): The timestamp, in seconds since the epoch. If this column is missing, Google DDP adds it automatically. If you rely on this value, we highly recommend specifying the column explicitly.

  • delete (optional): Boolean values (false or true) or numbers (0 or 1) indicating whether the cookie is added to or removed from the given audience segment. If the value is blank or the column is not provided, the value defaults to false.
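
The mappings above can be sketched as a single query that emits all four columns. A minimal Presto example, assuming a hypothetical source table site_events with a td_client_id identifier column; the reserved names timestamp and delete are quoted:

 SELECT
   td_client_id AS "cookie",          -- encrypted Google ID or mobile identifier
   'high_value_shoppers' AS "list_name",
   time AS "timestamp",               -- seconds since the epoch
   false AS "delete"                  -- add (false) rather than remove (true)
 FROM site_events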

Optionally Define the Source Column Name Mappings

  1. Define the mapping from Google DDP column names to the output column names that you specify in your query.

  2. Specify the target column.

  3. Specify the source column.

For example, if google_cookie is the identifier column in your TD data source, define the mapping as cookie:google_cookie. If the source column in the mapping is omitted, the target column name is used; for example, cookie alone is equivalent to the mapping cookie:cookie.
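
Alternatively, you can skip the mapping and alias the column in the query itself. A minimal sketch, assuming a hypothetical source table audience_source with a google_cookie column:

 SELECT google_cookie AS "cookie", "list_name"
 FROM audience_source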

  1. Complete the instructions in Creating a Destination Integration.
  2. Navigate to Data Workbench > Queries.

  3. Select a query for which you would like to export data.

  4. Run the query to validate the result set.

  5. Select Export Results.

  6. Select an existing integration authentication.
  7. Define any additional Export Results details. Review the integration parameters for your export integration.
    Your Export Results screen might differ, and you might not have additional details to fill out.
  8. Select Done.

  9. Run your query.

  10. Validate that your data moved to the destination you specified.

Integration Parameters

Cookie or Mobile Identifier Column Header

Specify the original source of the user cookie or mobile identifier.

You must select one of the options:

    • cookie_encrypted: Encrypted identifier (for example, from the web); a cookie hash of the user ID

    • cookie_idfa: iOS Advertising Identifier

    • cookie_adid: Android Advertising Identifier

    • cookie_epid: Externally provided cookie ID

Example Query

 SELECT DISTINCT "cookie", "list_name", "time"
 FROM "google_dfp_ddp"
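
If you also need to remove users from a list, the same query can emit the optional delete column described above. A hedged variant, assuming a hypothetical opted_out boolean flag in the source table:

 SELECT DISTINCT "cookie", "list_name", "time",
        opted_out AS "delete"   -- true removes the cookie from the segment
 FROM "google_dfp_ddp"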

Optionally Schedule the Query Export Jobs

You can use Scheduled Jobs with Result Export to periodically write the output result to a target destination that you specify.

1. Navigate to Data Workbench > Queries.
2. Create a new query or select an existing query.
3. Next to Schedule, select None.

4. In the drop-down, select one of the following schedule options:

  • Custom cron...: Review Custom cron... details.

  • @daily (midnight): Run once a day at midnight (00:00) in the specified time zone.

  • @hourly (:00): Run every hour at minute 00.

  • None: No schedule.

Custom cron... Details

The following cron values are commonly used:

  • 0 * * * * : Run once an hour.

  • 0 0 * * * : Run once a day at midnight.

  • 0 0 1 * * : Run once a month at midnight on the first day of the month.

  • (empty): Create a job that has no scheduled run time.

 *    *    *    *    *
 -    -    -    -    -
 |    |    |    |    |
 |    |    |    |    +----- day of week (0 - 6) (Sunday=0)
 |    |    |    +---------- month (1 - 12)
 |    |    +--------------- day of month (1 - 31)
 |    +-------------------- hour (0 - 23)
 +------------------------- min (0 - 59)

The following named entries can be used:

  • Day of Week: sun, mon, tue, wed, thu, fri, sat.

  • Month: jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec.

A single space is required between each field. The values for each field can be composed of:

  • A single value, within the limits displayed above for each field.

  • A wildcard ‘*’ to indicate no restriction based on the field. For example, ‘0 0 1 * *’ configures the schedule to run at midnight (00:00) on the first day of each month.

  • A range ‘2-5’, indicating the range of accepted values for the field. For example, ‘0 0 1-10 * *’ configures the schedule to run at midnight (00:00) on each of the first 10 days of each month.

  • A list of comma-separated values ‘2,3,4,5’, indicating the list of accepted values for the field. For example, ‘0 0 1,11,21 * *’ configures the schedule to run at midnight (00:00) on the 1st, 11th, and 21st day of each month.

  • A periodicity indicator ‘*/5’ to express how often, based on the field’s valid range of values, a schedule is allowed to run. For example, ‘30 */2 1 * *’ configures the schedule to run on the 1st of every month, every 2 hours starting at 00:30; ‘0 0 */5 * *’ configures the schedule to run at midnight (00:00) every 5 days starting on the 5th of each month.

  • A comma-separated list of any of the above except the ‘*’ wildcard, such as ‘2,*/5,8-10’. For example, ‘0 0 5,*/10,25 * *’ configures the schedule to run at midnight (00:00) every 5th, 10th, 20th, and 25th day of each month.
5. (Optional) You can delay the start time of a query by enabling Delay execution.

Execute the Query

Save the query with a name and run it, or just run the query. Upon successful completion of the query, the query result is automatically exported to the specified destination.

Scheduled jobs that continuously fail due to configuration errors may be disabled on the system side after several notifications.

Optionally Configure Export Results in Workflow

Within Treasure Workflow, you can specify the use of this data connector to export data.

Learn more at Using Workflows to Export Data with the TD Toolbelt.
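
As a sketch, a workflow step that runs the export query and writes the result through an existing authentication might look like the following. The query file, database, and connection names are hypothetical, and any connector-specific keys under result_settings should be taken from your configured integration:

 +export_dfp_audience:
   td>: queries/dfp_audience.sql      # hypothetical query file
   database: my_database              # hypothetical database name
   result_connection: my_dfp_auth     # name of the authentication created in TD Console
   result_settings:
     # connector-specific export parameters for this integration go here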

