# The Trade Desk Import Integration Using CLI

## Install the Treasure Data Toolbelt

Install the newest [Treasure Data Toolbelt](https://toolbelt.treasuredata.com/).

## Create a Configuration File (load.yml)

The configuration file includes an in: section, where you specify what comes into the connector from The Trade Desk, and an out: section, where you specify what the connector puts out to the database in Treasure Data. For more details on the available out modes, see the [Appendix](https://docs.treasuredata.com/articles/project-integrations/the-trade-desk-import-integration+using+CLI#TheTradeDeskImportIntegrationusingCLI-Appendix).

The following example shows how to specify an Advertiser import.

```yaml
in:
  type: the_trade_desk
  login_credential: login_credential
  password: password
  target: advertiser
out:
  mode: append
```

The following example shows how to specify a Campaign import.

```yaml
in:
  type: the_trade_desk
  login_credential: login_credential
  password: password
  target: campaign
  advertiser_id: advertiser_id1, advertiser_id2, advertiser_id3
out:
  mode: append
```

The following example shows how to specify a Data Group import.

```yaml
in:
  type: the_trade_desk
  login_credential: login_credential
  password: password
  target: data_group
  advertiser_id: advertiser_id1, advertiser_id2, advertiser_id3
out:
  mode: append
```

The following example shows how to specify a Tracking Tags import.

```yaml
in:
  type: the_trade_desk
  login_credential: login_credential
  password: password
  target: tracking_tags
  advertiser_id: advertiser_id1, advertiser_id2, advertiser_id3
out:
  mode: append
```

## Preview the Data to be Imported (Optional)

You can preview the data to be imported using the td connector:preview command.

```
$ td connector:preview load.yml
```

## Execute the Load Job

Use td connector:issue to execute the job. You must specify the database and table where you want to store the data before you execute the load job, for example td_sample_db and td_sample_table.

```bash
$ td connector:issue load.yml \
  --database td_sample_db \
  --table td_sample_table \
  --time-column date_time_column
```

We recommend specifying the --time-column option because Treasure Data's storage is partitioned by time. If the option is not given, the data connector selects the first long or timestamp column as the partitioning time. The column specified by --time-column must be of long or timestamp type (use the preview results to check the available column names and types; most targets include a last_modified_date column).

If your data doesn't have a time column, you can add one with the add_time filter. See the [add_time filter](https://support.treasuredata.com/hc/en-us/articles/360001405587-add-time-filter-plugin-for-Data-Connector) plugin documentation for details; a sample configuration is shown below.
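The following is a minimal sketch of how the add_time filter might be added to load.yml. The target, credential values, and the choice of upload_time as the source of the time value are placeholders for illustration; check the add_time filter documentation linked above for the options that apply to your data.

```yaml
in:
  type: the_trade_desk
  login_credential: login_credential
  password: password
  target: advertiser
filters:
  # Adds a "time" column so the table can be partitioned by time,
  # here using the time at which the data is uploaded (an assumption
  # for illustration; other modes are described in the add_time docs).
  - type: add_time
    to_column:
      name: time
      type: timestamp
    from_value:
      mode: upload_time
out:
  mode: append
```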
td connector:issue assumes you have already created the database (td_sample_db) and the table (td_sample_table). If the database or the table does not exist in TD, td connector:issue fails. Either create the database and table manually, or use --auto-create-table with td connector:issue to create them automatically.

```bash
$ td connector:issue load.yml \
  --database td_sample_db \
  --table td_sample_table \
  --time-column date_time_column \
  --auto-create-table
```

From the command line, submit the load job. Processing might take a couple of hours depending on the data size.

# Appendix

## Modes for the out plugin

You can specify the file import mode in the out: section of the load.yml file. The out: section controls how data is imported into a Treasure Data table, for example, whether records are appended to or replace the data in an existing table in Treasure Data.

- **Append** (default): Records are appended to the target table.
- **Replace** (available in td 0.11.10 and later): Replaces the data in the target table. Any manual schema changes made to the target table remain intact.

Examples:

```yaml
in:
  ...
out:
  mode: append
```

```yaml
in:
  ...
out:
  mode: replace
```

## Invalid ID error handling

If the first Advertiser ID in the advertiser_id list is invalid, the connector detects it and shows an error message. If an invalid ID appears later in the list, it is not detected up front; the job still runs, and the invalid IDs are reported in the output log.
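For illustration, consider a hypothetical configuration like the following (the ID values are placeholders). If the invalid value were first in the list, the job would fail immediately with an error message; because it appears after a valid ID, the job runs and the invalid ID is only flagged in the output log.

```yaml
in:
  type: the_trade_desk
  login_credential: login_credential
  password: password
  target: campaign
  # "invalid_id" is not the first entry, so the job still runs;
  # the invalid ID is reported only in the output log.
  advertiser_id: advertiser_id1, invalid_id, advertiser_id3
out:
  mode: append
```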