# SFTP V2 Server Import Integration

The Data Connector for `SFTP_V2` enables you to import files stored on your SFTP server to Treasure Data.

## Prerequisites

- Basic knowledge of Treasure Data.
- Before using this integration, determine the valid protocols for your environment. If you intend to use *SFTP*, use this integration. If you are using *FTP* or *FTPS*, connect with the [FTP Import Integration](/int/ftp-server-import-integration) instead.
- Check your accepted IP range and port if you are using a firewall. Server administrators sometimes change the default port number from TCP 22 for security reasons.
- PuTTY-format private keys and other non-OpenSSH key formats are not supported.
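If you want to confirm connectivity before configuring the integration, you can attempt a connection with a standard OpenSSH client. This is a minimal sketch; the host, user, port, and key path are placeholders for your environment:

```bash
# Placeholders: replace HOST, USER, the port, and the key path with your values.
# -P sets the server port (TCP 22 unless your administrator changed it);
# -i selects an OpenSSH-format private key.
sftp -P 22 -i ~/.ssh/id_rsa USER@HOST
```

If this connection succeeds, the same host, port, user, and key should work in the authentication dialog described below.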
## Limitations

- Only the STORED and DEFLATE compression methods are supported.
- Multi-part gzip files may not work.

## Static IP Addresses of Treasure Data Integrations

If your security policy requires IP whitelisting, you must add Treasure Data's IP addresses to your allowlist to ensure a successful connection. The complete list of static IP addresses, organized by region, is available at [https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/](https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/).

## Use the TD Console to Create Your Connection

### Create a New Connection

In Treasure Data, you must create and configure the data connection before running your query. As part of the data connection, you provide authentication to access the integration.

1. Open **TD Console**.
2. Navigate to **Integrations Hub** > **Catalog**.
3. Search for and select SFTP_V2.

![](/assets/image2021-9-17_23-49-47.29a282eeb76e1632f676d6e4277e50d179e81310b1a0fd3eef456bd764f48466.8d4637e9.png)

4. Select **Create Authentication**. The following dialog opens. Edit the parameters, then select **Continue**.

![](/assets/image2021-9-17_23-53-17.c0b0922f8f427a0a6bbf7817123a43acc9ca8e12aba66472879f34e594e1c64f.8d4637e9.png)

| Parameters | Description |
| --- | --- |
| **Host** | The host information of the remote SFTP instance, for example, an IP address. |
| **Port** | The connection port on the remote SFTP instance; the default is 22. |
| **User** | The user name used to connect to the remote SFTP instance. |
| **Authentication mode** | The way you choose to authenticate with your SFTP server. |
| **Secret key file** | Required if 'public/private key pair' is selected as the `Authentication Mode`. (RSA, DSS, ECDSA, and ED25519 keys are supported.) |
| **Passphrase for secret key file** | (Optional) If required, provide a passphrase for the provided secret key file. |
| **Retry limit** | The number of times to retry a failed connection (default: 10). |
| **Timeout** | Connection timeout in seconds (default: 600). |

5. Enter a name for your connection.
6. Choose whether to share the authentication with others.
7. Select **Continue**.

## Transfer Your Data to Treasure Data

After creating the authenticated connection, you are automatically taken to Authentications.

1. Search for the connection you created.
2. Select **New Source**.
3. Type a name for your **Source** in the Data Transfer Name field.

![](/assets/image2021-9-18_0-9-29.0b72284e0831e9e9b4ccbf4093d2e06b7f7b3a5597c340ad1420d6998b9ae741.8d4637e9.png)

4. Select **Next**.

![](/assets/image2021-9-18_0-12-21.cc2a19e06088fb2adcb88ab7f9943ee81469e4fa3eed312dd611bea7504b5ee2.8d4637e9.png)

5. Edit the following parameters (a configuration sketch of these options follows at the end of this section):

| Parameters | Description |
| --- | --- |
| User directory root | Select this if the path prefix is under the user directory, for example `/home/test_user`. |
| Path prefix | Prefix of target files; it must point to a folder (string, required). Unlike in SFTP v1, the path prefix must be a folder path. If a partial file name is included in the file path, you receive an `invalid path_prefix:xxx` error message. Keep this in mind if you are migrating from SFTP v1 to SFTP v2 Import, because `path_prefix` in v2 does not behave the same way as in v1. |
| Path match pattern | Type a regular expression to filter file paths. If a file path doesn't match the specified pattern, the file is skipped. For example, if you specify the pattern `\.csv$`, any file whose path does not end in `.csv` is skipped. |
| Incremental | Enables incremental loading (boolean, optional, default: true). If incremental loading is enabled, the config diff for the next execution includes the `last_path` parameter so that the next execution skips files up to that path. Otherwise, `last_path` is not included. |
| Start after path | Only paths lexicographically greater than this value are imported. |

6. Select **Next**. The Data Settings page opens. You can modify the settings for your needs or skip this page.

![](/assets/image2021-9-18_0-17-24.f0bf571ba19bc2d69174a85a689b6c484e762759abf446ea696f96ba5cc04b5d.8d4637e9.png)

![](/assets/image2023-08-17.8bf4488f6a24c023564fb641a83d36fe324d065e65988518165f784c5ebd1520.8d4637e9.png)

### Data Preview

You can see a [preview](/products/customer-data-platform/integration-hub/batch/import/previewing-your-source-data) of your data before running the import by selecting Generate Preview. Data preview is optional; you can safely skip to the next page of the dialog if you choose to.

1. Select **Next**. The Data Preview page opens.
2. If you want to preview your data, select **Generate Preview**.
3. Verify the data.

### Data Placement

For data placement, select the target database and table where you want your data placed and indicate how often the import should run.

1. Select **Next**. Under Storage, create a new database or select an existing one, and create a new table or select an existing one, for the imported data.
2. Select a **Database** > **Select an existing** or **Create New Database**.
3. Optionally, type a database name.
4. Select a **Table** > **Select an existing** or **Create New Table**.
5. Optionally, type a table name.
6. Choose the method for importing the data.
   - **Append** (default): Data import results are appended to the table. If the table does not exist, it is created.
   - **Always Replace**: Replaces the entire content of an existing table with the result output of the query. If the table does not exist, a new table is created.
   - **Replace on New Data**: Replaces the entire content of an existing table with the result output only when there is new data.
7. Select the **Timestamp-based Partition Key** column. If you want to set a different partition key seed than the default key, you can specify a long or timestamp column as the partitioning time. As the default time column, it uses `upload_time` with the add_time filter.
8. Select the **Timezone** for your data storage.
9. Under **Schedule**, choose when and how often you want to run this query.

#### Run Once

1. Select **Off**.
2. Select **Scheduling Timezone**.
3. Select **Create & Run Now**.

#### Repeat Regularly

1. Select **On**.
2. Select the **Schedule**. The UI provides four options: *@hourly*, *@daily*, and *@monthly*, or a custom *cron* expression.
3. Optionally, select **Delay Transfer** to add a delay to the execution time.
4. Select **Scheduling Timezone**.
5. Select **Create & Run Now**.

After your transfer has run, you can see the results of your transfer in **Data Workbench** > **Databases**.
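As a reference for how the source options in step 5 fit together, here is a hedged sketch using hypothetical paths; the option keys correspond to the configuration table in the workflow section below:

```yaml
# Hypothetical values illustrating the source options above:
path_prefix: /data/sales      # must point to a folder in SFTP v2
path_match_pattern: '\.csv$'  # files whose paths do not end in .csv are skipped
incremental: true             # record last_path so the next run skips earlier files
```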
# Import with SFTP via TD Workflow

Create and run a workflow:

```yaml
_export:
  td:
    database: workflow_sftp_v2
    table: workflow_sftp_v2

+import_from_sftp_v2:
  td_load>: imports/seed.yml
  database: ${td.database}
  table: ${td.table}
```

Modify the *seed.yml* file with your SFTP connection details for the import.

```yaml
in:
  type: sftp_v2
  host: HOST
  port: 22
  auth_method: key_pair
  user: USER
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: PASSPHRASE
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
  parser:
    skip_header_lines: 1
    charset: UTF-8
    newline: CRLF
    type: csv
    delimiter: ','
    quote: '"'
    columns:
    - {name: id, type: long}
    - {name: account, type: long}
    - {name: time, type: timestamp, format: "%Y-%m-%d %H:%M:%S"}
    - {name: purchase, type: timestamp, format: "%Y%m%d"}
    - {name: comment, type: string}
    - {name: json_column, type: json}
out:
  mode: append
```

| Configuration Parameters | Description |
| --- | --- |
| `host` | Host of the remote SFTP instance (string, required) |
| `port` | Connection port (string, default: 22) |
| `auth_method` | Authentication method (string, `'password'` or `'key_pair'`, required) |
| `user` | User name (string, required) |
| `password` | Password (string, default: null) |
| `secret_key_file` | Private key; OpenSSH format is required (string, default: null) |
| `secret_key_passphrase` | Passphrase for the secret key file (string, default: "") |
| `user_directory_is_root` | Whether paths are resolved relative to the user directory (boolean, default: true) |
| `timeout` | SFTP connection timeout in seconds (integer, default: 600) |
| `path_prefix` | Prefix of target file paths (string, required) |
| `incremental` | Enables incremental loading (boolean, optional, default: true). If incremental loading is enabled, the config diff for the next execution includes the `last_path` parameter so that the next execution skips files up to that path. Otherwise, `last_path` is not included. |
| `path_match_pattern` | Regular expression to match file paths. If a file path doesn't match this pattern, the file is skipped (regexp string, optional) |
| `total_file_count_limit` | Maximum number of files to read (integer, optional) |
| `min_task_size` (experimental) | Minimum size of a task. If this is larger than 0, one task includes multiple input files. This is useful when too large a number of tasks degrades the performance of output or executor plugins (integer, optional) |
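To run the workflow on a schedule, you can add a `schedule` block to the workflow definition. The following is a minimal sketch using Digdag's built-in scheduling; the run time is illustrative, and it assumes *seed.yml* is stored at `imports/seed.yml`:

```yaml
timezone: UTC

schedule:
  daily>: 00:10:00   # run every day at 00:10 UTC

_export:
  td:
    database: workflow_sftp_v2
    table: workflow_sftp_v2

+import_from_sftp_v2:
  td_load>: imports/seed.yml
  database: ${td.database}
  table: ${td.table}
```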
# Import with SFTP via the CLI (TD Toolbelt)

## Install TD Toolbelt

Install the most current [Treasure Data Toolbelt](https://toolbelt.treasuredata.com/) and verify the installation:

```
$ td --version
```

## Create Seed Config File (seed.yml)

Prepare *seed.yml*, as shown in the following examples, with your SFTP_v2 details. Two authentication methods are supported: public/private key pair and password.

### Public and Private Key Pair Authentication

Create *seed.yml* with the following content.

```yaml
in:
  type: sftp_v2
  host: HOST
  port: 22
  auth_method: key_pair
  user: USER
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: PASSPHRASE
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
out:
  mode: append
exec: {}
```

`secret_key_file` requires OpenSSH format.

### Password Authentication

Create *seed.yml* with the following content.

```yaml
in:
  type: sftp_v2
  host: HOST
  port: 22
  auth_method: password
  user: USER
  password: PASSWORD
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
out:
  mode: append
exec: {}
```

You can use the following special characters in the password: `#$!*@`

The SFTP_v2 integration imports all files that match the specified prefix, and `path_prefix` must point to a folder (e.g., `path_prefix: path/to/sample` → `path/to/sample/201501.csv.gz`, `path/to/sample/201502.csv.gz`, …, `path/to/sample/201505.csv.gz`).

## Guess Fields (Generate load.yml)

Use *connector:guess*. This command automatically reads the source files and guesses the file format.

```
$ td connector:guess seed.yml -o load.yml
```

If you open *load.yml*, you see the guessed file format definitions, including file formats, encodings, column names, and types. This example loads CSV files.

```yaml
in:
  type: sftp_v2
  host: HOST
  port: 22
  auth_method: key_pair
  user: USER
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: PASSPHRASE
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
  parser:
    skip_header_lines: 1
    charset: UTF-8
    newline: CRLF
    type: csv
    delimiter: ','
    quote: '"'
    columns:
    - {name: id, type: long}
    - {name: account, type: long}
    - {name: time, type: timestamp, format: "%Y-%m-%d %H:%M:%S"}
    - {name: purchase, type: timestamp, format: "%Y%m%d"}
    - {name: comment, type: string}
    - {name: json_column, type: json}
out:
  mode: append
exec: {}
```

Then, you can preview how the system will parse the file by using the *preview* command.

```
td connector:preview load.yml
```

The guess command needs more than 3 rows and 2 columns in the source data file, because it guesses the column definitions using sample rows from the source data. If the system detects a column name or column type unexpectedly, modify *load.yml* directly and preview again. The integration supports parsing of "boolean", "long", "double", "string", and "timestamp" types.

You must also create a database and table before executing the data load job:

```bash
td database:create td_sample_db
td table:create td_sample_db td_sample_table
```

## Execute Load Job

Submit the load job. It may take a couple of hours, depending on the size of the data. Specify the Treasure Data database and table where the data should be stored.

It is also recommended to specify the *--time-column* option, because Treasure Data's storage is partitioned by time (see [data partitioning](https://docs.treasuredata.com/smart/project-product-documentation/data-partitioning-in-treasure-data)). If the option is not provided, the integration chooses the first *long* or *timestamp* column as the partitioning time. The column specified by *--time-column* must be of either *long* or *timestamp* type.

If your data doesn't have a time column, you can add one by using the *add_time* filter option. For more details, see the [add_time filter plugin](https://docs.treasuredata.com/smart/project-product-documentation/add_time-filter-function).
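For illustration, here is a hedged sketch of such a filter, based on the add_time filter plugin's documented options: it adds a `time` column set to the time the import job runs. The block goes at the top level of *load.yml*, alongside `in:` and `out:`; adjust it to your data.

```yaml
filters:
- type: add_time
  to_column:
    name: time          # name of the column to add
    type: timestamp
  from_value:
    mode: upload_time   # use the time the import job runs as the value
```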
```bash
td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at
```

The connector:issue command assumes that you have already created the database (*td_sample_db*) and the table (*td_sample_table*). If the database or the table does not exist in TD, the connector:issue command fails. If this happens, [create the database](https://docs.treasuredata.com/smart/project-product-documentation/creating-or-viewing-a-database) and [create the table](https://docs.treasuredata.com/smart/project-product-documentation/creating-or-viewing-tables) manually, or use the *--auto-create-table* option with the *td connector:issue* command to create them automatically:

```bash
td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at --auto-create-table
```

The integration does not sort records on the server side. To use time-based partitioning effectively, sort records beforehand.

If you have a field named `time`, you don't have to specify the `--time-column` option:

```bash
td connector:issue load.yml --database td_sample_db --table td_sample_table
```

## Scheduled Execution

You can schedule periodic integration execution for incremental SFTP_v2 file import. We configure our scheduler carefully to ensure high availability. Using this feature means you no longer need a *cron* daemon in your local data center.

For a scheduled import, the SFTP_v2 integration first imports all files that match the specified prefix (e.g., `path_prefix: path/to/sample` → `path/to/sample/201501.csv.gz`, `path/to/sample/201502.csv.gz`, …, `path/to/sample/201505.csv.gz`) and remembers the last path (`path/to/sample/201505.csv.gz`) for the next execution. On the second and subsequent runs, it imports only files that come after the last path in alphabetical (lexicographic) order (`path/to/sample/201506.csv.gz`, …).

### Create the Schedule

A new schedule can be created using the *td connector:create* command. The following are required: the name of the schedule, the cron-style schedule, the database and table where the data will be stored, and the integration configuration file.

```bash
td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml
```

It's also recommended to specify the *--time-column* option, because Treasure Data's storage is partitioned by time.

```bash
td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at
```

The `cron` parameter also accepts three special values: `@hourly`, `@daily`, and `@monthly`.

By default, the schedule is set up in the UTC timezone. You can set the schedule in another timezone using the -t or --timezone option. The `--timezone` option supports only extended timezone names such as 'Asia/Tokyo' and 'America/Los_Angeles'. Timezone abbreviations such as PST and CST are *not* supported and may lead to unexpected schedules.
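For example, to evaluate the same daily schedule in Tokyo time instead of UTC (schedule name and values as above):

```bash
td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --timezone Asia/Tokyo
```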
### List the Schedules

You can see the list of currently scheduled entries by running *td connector:list*.

```
td connector:list
```

### Show the Setting and Schedule History

*td connector:show* shows the execution settings of a schedule entry.

```
td connector:show daily_import
```

*td connector:history* shows the execution history of a schedule entry. To investigate the results of each individual run, use *td job <jobid>*.

```bash
td connector:history daily_import
```

### Delete the Schedule

*td connector:delete* removes the schedule.

```bash
td connector:delete daily_import
```

## Modes for the out Plugin

You can specify the file import mode in the *out* section of *seed.yml*.

### append (default)

This is the default mode; records are appended to the target table.

```yaml
in:
  ...
out:
  mode: append
```

### replace (in td 0.11.10 and later)

This mode replaces the data in the target table. Note that any manual schema changes made to the target table remain intact with this mode.

```yaml
in:
  ...
out:
  mode: replace
```

# Import from SFTP Server via Workflow

For sample workflows that import files from your SFTP server, see [Treasure Boxes](https://github.com/treasure-data/treasure-boxes/tree/master/td_load/sftp).