The Data Connector for SFTP enables you to import files stored on your SFTP server to Arm Treasure Data.

Prerequisites

Before you begin, you need basic knowledge of Treasure Data and an SFTP server that Treasure Data can reach, along with its connection details (host, port, user, and either a password or a private key).

Use the TD Console to create your connection

You can use the TD Console to create your data connector.

Create a new connection

When you configure a data connection, you provide authentication to access the integration. In Treasure Data, you configure the authentication and then specify the source information.

Go to the Catalog, search for and select SFTP, and then click Create.

Enter the required credentials for your remote SFTP instance, including the host, port, user, and authentication details (password or key pair).

Click Continue after entering the required connection details. Name the connection so you can easily find it later, should you need to modify any of the connection details. Note: If you would like to share this connection with other users in your organization, check the Share with others checkbox. If this box is unchecked, the connection is visible only to you.

Click Create Authentication to complete the connection. If the connection is successful, the connection you just created appears in your list of authentications with the name you provided.

Transfer data into Treasure Data

Now that you have created the connection to your remote SFTP instance, the next step is getting the data from your SFTP server into Treasure Data. You can set up a one-time ad hoc transfer or a recurring transfer at a regular interval. In this section, you specify source details as described in the following steps.

Enter SFTP Server details

Provide the details of the path on your SFTP server from which you want to ingest data.

Click Next to preview the data in the next step.

Preview

If there are no errors with the connection, you see a preview of the data to be imported. If you have issues viewing the preview, contact support.

The preview command downloads one file from the specified path and displays the results from that file. This can cause the preview results to differ from the results of the actual import job.

If you need to use non-standard options for your import, select Advanced Settings.

Advanced Settings

Advanced Settings allow you to modify aspects of your transfer to accommodate special requirements.

Transfer to

Select the Treasure Data target database and table that you want to import your data to. You can create a new database or table using the Create new database or Create new table check boxes.

Data Transfer Frequency

You can choose to run the transfer only one time, or schedule it to run at a frequency of your choosing.

After selecting the frequency, click Start Transfer to begin the transfer. If there are no errors, the transfer into Treasure Data completes and the data becomes available. Jobs are kicked off when a transfer runs. You can use the Jobs or the My Input Transfers section to monitor the progress of your data transfer.

My Input Transfers

If you need to review the transfer you have just completed, or other data connector jobs, you can view a list of your transfers in the My Input Transfers section. You can also edit the details of a data transfer there.

Appendix

A) Optional Alternative: Use the CLI to Configure the Connector

You can also use the SFTP data connector from the command line interface. The following instructions show you how to import data using the CLI.

Install ‘td’ command v0.11.9 or later

Install the most current Treasure Data Toolbelt.

$ td --version
0.11.10

Create Seed Config File (seed.yml)

Prepare seed.yml as shown in the following example, with your SFTP details. We support two authentication methods: Public / Private Key Pair, and Password.

Case 1: Public / Private Key Pair Authentication

Create seed.yml with the following content.

in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  user: <USER>
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: <PASSPHRASE>
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
out:
  mode: append
  exec: {}

`secret_key_file` requires OpenSSH format.
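If your key is not in the classic PEM layout shown above, ssh-keygen can rewrite it. The following is a minimal sketch assuming an RSA key at ~/.ssh/id_rsa; note that -p rewrites the file in place, so back up the key first:

$ ssh-keygen -p -m PEM -f ~/.ssh/id_rsa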

Case 2: Password Authentication

Create seed.yml with the following content.

in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  auth_method: password
  user: <USER>
  password: <PASSWORD>
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
out:
  mode: append
  exec: {}

You can use the following special characters in the password: "#$!*@"

If you are using a proxy, add the additional information as shown:

in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  ....
  proxy:
    type: http
    host: <PROXY_HOST>
    port: <PROXY_PORT>
    user: <PROXY_USER>
    password: <PROXY_PASSWORD>
    command: <PROXY COMMAND, IF NEEDED>

The Data Connector for SFTP imports all files that match the specified prefix (e.g. path_prefix: path/to/sample_ -> path/to/sample_201501.csv.gz, path/to/sample_201502.csv.gz, …, path/to/sample_201505.csv.gz).

For more details on available out modes, see Appendix B.

Guess Fields (Generate load.yml)

Use td connector:guess. This command automatically reads the source file and guesses the file format.

$ td connector:guess seed.yml -o load.yml

If you open load.yml, you’ll see the guessed file format definitions, including file formats, encodings, column names, and types. This example loads CSV files.

in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  user: <USER>
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: <PASSPHRASE>
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
  parser:
    skip_header_lines: 1
    charset: UTF-8
    newline: CRLF
    type: csv
    delimiter: ','
    quote: '"'
    columns:
    - {name: id, type: long}
    - {name: account, type: long}
    - {name: time, type: timestamp, format: "%Y-%m-%d %H:%M:%S"}
    - {name: purchase, type: timestamp, format: "%Y%m%d"}
    - {name: comment, type: string}
    - {name: json_column, type: json}
out:
  mode: append
  exec: {}

Then, you can preview how the system will parse the file by using the preview command.

$ td connector:preview load.yml
+-------+---------+----------+---------------------+
| id    | company | customer | created_at          |
+-------+---------+----------+---------------------+
| 11200 | AA Inc. |    David | 2015-03-31 06:12:37 |
| 20313 | BB Imc. |      Tom | 2015-04-01 01:00:07 |
| 32132 | CC Inc. | Fernando | 2015-04-01 10:33:41 |
| 40133 | DD Inc. |    Cesar | 2015-04-02 05:12:32 |
| 93133 | EE Inc. |     Jake | 2015-04-02 14:11:13 |
+-------+---------+----------+---------------------+

The guess command needs more than 3 rows and 2 columns in the source data file, because it guesses the column definitions using sample rows from the source data.
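For example, a source file like the following (a hypothetical sample consistent with the preview output above) has enough rows and columns for guess to work with:

id,company,customer,created_at
11200,AA Inc.,David,2015-03-31 06:12:37
20313,BB Inc.,Tom,2015-04-01 01:00:07
32132,CC Inc.,Fernando,2015-04-01 10:33:41
40133,DD Inc.,Cesar,2015-04-02 05:12:32
93133,EE Inc.,Jake,2015-04-02 14:11:13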

If the system guesses a column name or column type incorrectly, modify load.yml directly and preview again.
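For example, to treat the account column from the load.yml above as a string instead of a long, edit the corresponding entry under parser and run td connector:preview again:

  parser:
    ...
    columns:
    - {name: id, type: long}
    - {name: account, type: string}   # changed from long to string
    ...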

Currently, the Data Connector supports parsing of “boolean”, “long”, “double”, “string”, “timestamp”, and “json” types.

You must also create a destination database and table in Treasure Data prior to executing the data load job. Follow these steps:

$ td database:create td_sample_db
$ td table:create td_sample_db td_sample_table

Execute Load Job

Submit the load job. It may take a couple of hours depending on the size of the data. Specify the Treasure Data database and table where the data should be stored.

It’s also recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time (see data partitioning). If the option is not provided, the data connector chooses the first long or timestamp column as the partitioning time. The type of the column specified by --time-column must be either long or timestamp.

If your data doesn’t have a time column, you can add one by using the add_time filter option. For more details, see the add_time filter plugin documentation.
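As a minimal sketch, a filters section like the following could be added to load.yml to stamp each record with the time of upload (option names follow the add_time plugin’s to_column/from_value form; adjust to your data):

in:
  ...
filters:
- type: add_time
  to_column:
    name: time
    type: timestamp
  from_value:
    mode: upload_time
out:
  mode: append
  exec: {}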

$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at

The connector:issue command assumes that you have already created a database (td_sample_db) and a table (td_sample_table). If the database or the table does not exist in TD, the connector:issue command fails. Create the database and table manually, or use the --auto-create-table option with the td connector:issue command to create them automatically:

$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at --auto-create-table

The Data Connector does not sort records on the server side. To use time-based partitioning effectively, sort records in files beforehand.
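For example, a CSV file with a header row can be pre-sorted by its time column like this (a sketch assuming a hypothetical data.csv in which created_at is the fourth comma-separated field):

$ (head -n 1 data.csv && tail -n +2 data.csv | sort -t, -k4) > sorted.csv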

If you have a field called `time`, you don't have to specify the `--time-column` option.

$ td connector:issue load.yml --database td_sample_db --table td_sample_table

Scheduled execution

You can schedule periodic Data Connector execution for incremental SFTP file import. We configure our scheduler carefully to ensure high availability. By using this feature, you no longer need a cron daemon on your local data center.

For the scheduled import, the Data Connector for SFTP imports all files that match the specified prefix (e.g. path_prefix: path/to/sample_ -> path/to/sample_201501.csv.gz, path/to/sample_201502.csv.gz, …, path/to/sample_201505.csv.gz) on the first run, and remembers the last path (path/to/sample_201505.csv.gz) for the next execution.

On the second and subsequent runs, it imports only files that come after the last path in alphabetical (lexicographic) order (path/to/sample_201506.csv.gz, …).

Create the schedule

A new schedule can be created using the td connector:create command. The following are required: the name of the schedule, the cron-style schedule, the database and table where the data will be stored, and the Data Connector configuration file.

$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml

It’s also recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time (see data partitioning):

$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at

The `cron` parameter also accepts three special options: `@hourly`, `@daily` and `@monthly`.
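For example, reusing the names from above with @daily in place of a cron expression:

$ td connector:create \
    daily_import \
    "@daily" \
    td_sample_db \
    td_sample_table \
    load.yml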

By default, the schedule is set up in the UTC timezone. You can set the schedule in a different timezone using the -t or --timezone option. The `--timezone` option supports only extended timezone formats like 'Asia/Tokyo', 'America/Los_Angeles', etc. Timezone abbreviations like PST and CST are *not* supported and may lead to unexpected schedules.
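For example, the following sketch runs the schedule at 00:10 Tokyo time:

$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --timezone "Asia/Tokyo"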

List the Schedules

You can see the list of currently scheduled entries by running the command td connector:list.

$ td connector:list
+--------------+--------------+----------+-------+--------------+-----------------+--------------------------------------------+
| Name         | Cron         | Timezone | Delay | Database     | Table           | Config                                     |
+--------------+--------------+----------+-------+--------------+-----------------+--------------------------------------------+
| daily_import | 10 0 * * *   | UTC      | 0     | td_sample_db | td_sample_table | {"in"=>{"type"=>"sftp", "host"=>"<HOST>... |
+--------------+--------------+----------+-------+--------------+-----------------+--------------------------------------------+

Show the Setting and Schedule History

td connector:show shows the execution setting of a schedule entry.

% td connector:show daily_import
Name     : daily_import
Cron     : 10 0 * * *
Timezone : UTC
Delay    : 0
Database : td_sample_db
Table    : td_sample_table
Config
---
in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  auth_method: password
  user: <USER>
  password: <PASSWORD>
  path_prefix: /sftp/file/path/prefix
  parser:
    charset: UTF-8
    ...

td connector:history shows the execution history of a schedule entry. To investigate the results of each individual run, use td job:show <jobid>.

% td connector:history daily_import
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| JobID  | Status  | Records | Database     | Table           | Priority | Started                   | Duration |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| 578066 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-18 00:10:05 +0000 | 160      |
| 577968 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-17 00:10:07 +0000 | 161      |
| 577914 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-16 00:10:03 +0000 | 152      |
| 577872 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-15 00:10:04 +0000 | 163      |
| 577810 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-14 00:10:04 +0000 | 164      |
| 577766 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-13 00:10:04 +0000 | 155      |
| 577710 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-12 00:10:05 +0000 | 156      |
| 577610 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-11 00:10:04 +0000 | 157      |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
8 rows in set
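For example, to inspect the most recent run listed above (job IDs come from the history output):

$ td job:show 578066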

Delete the Schedule

td connector:delete will remove the schedule.

$ td connector:delete daily_import

B) Modes for out plugin

You can specify the file import mode in the out section of seed.yml.

append (default)

This is the default mode and records are appended to the target table.

in:
  ...
out:
  mode: append

replace (in td 0.11.10 and later)

This mode replaces data in the target table. Note that any manual schema changes made to the target table will remain intact with this mode.

in:
  ...
out:
  mode: replace

FAQ for the SFTP Data Connector

I can’t connect to my SFTP server, what can I do?

Verify that your SFTP server is reachable from Treasure Data, and that the host, port, and user are correct. For password authentication, confirm the password; for key pair authentication, confirm that the private key is in the format shown in Appendix A and that the passphrase is correct. If your server sits behind a proxy, check the proxy settings as well.
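As a basic connectivity check from a machine with network access similar to Treasure Data’s, you can try the standard sftp client (a sketch; substitute your own host, port, and user):

$ sftp -P <PORT> <USER>@<HOST>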

How do I troubleshoot data import problems?

Review the job log. Warnings and errors provide information about the success of your import. For example, you can identify the source file names associated with import errors.