Learn more about SFTP Server Export Integration.

The Data Connector for SFTP enables you to import files stored on your SFTP server to Arm Treasure Data.

For sample workflows of importing files from your SFTP server, view Treasure Boxes.


Prerequisites

  • Basic knowledge of Treasure Data.

  • Requires that your private key is in the OpenSSH format.

  • Requires that the OpenSSH format private key was generated using the '-m PEM' option; the default private key format changed after OpenSSH 7.8, so newer versions no longer emit PEM by default. A key-generation example follows this list.
  • Before using this connector, determine the valid protocols for your environment.
    If you intend to use SFTP, you can use this integration.
    If you use FTP/FTPS, try connecting with the FTP Import Integration instead.
    • If you are using a firewall, check your accepted IP range/port. Server administrators sometimes change the default port number from TCP/22 for security reasons.

    • “PuTTY” and other formats are not supported.

  • After installation and configuration, review the job log. Warnings and errors provide information about the success of your import. For example, you can identify the source file names associated with import errors.
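
Both key requirements and basic connectivity can be checked up front from any machine with a standard OpenSSH client. A minimal sketch; the host, user, and key file name are placeholders rather than values from this document:

Code Block
# Confirm the server accepts SFTP on the expected port (default TCP/22).
$ sftp -P 22 user@sftp.example.com

# Generate a new 4096-bit RSA key pair in PEM format, as this connector requires.
$ ssh-keygen -t rsa -b 4096 -m PEM -f td_sftp_key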

Limitations

  • Only the STORED and DEFLATE compression methods are supported.
  • Multi-part gzip files may not work. A possible workaround is sketched below.
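
If a multi-part (multi-member) gzip file fails to import, one possible workaround is to recompress it as a single-member gzip before placing it on the server. A sketch assuming a standard gzip toolchain; the file names are hypothetical:

Code Block
# Decompress every gzip member, then recompress the stream as a single member.
$ zcat multi_part.csv.gz | gzip > single_member.csv.gz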

Use the TD Console to Create Your Connection

You can use the TD Console to create your data connector.

Create a New Connection

When you configure a data connection, you provide authentication to access the integration. In Treasure Data, you configure the authentication and then specify the source information.


  1. Open TD Console.

  2. Navigate to Integrations Hub > Catalog.

  3. Search and select SFTP.



  4. Select Create 

  5. The following dialog opens.




  6. Enter the required credentials for your remote SFTP instance. Set the following parameters.

    • Host: The host information of the remote SFTP instance, for example an IP address.

    • Port: The connection port on the remote SFTP instance; the default is 22.

    • User: The user name used to connect to the remote SFTP instance.

    • Authentication mode: The way you choose to authenticate with your SFTP server.

    • Secret key file: Required if 'public/private key pair' is selected as the Authentication Mode. (The ecdsa key type is supported.)

    • Passphrase for secret key file: (Optional) If required, provide a passphrase for the provided secret file.

    • Retry limit: Number of times to retry a failed connection (default 10).

    • Timeout: Connection timeout in seconds (default 600).



  1. Select Continue. Type a name for your connection.



  2. If you would like to share this connection with other users in your organization, check the Share with others checkbox. If this box is unchecked, this connection is visible only to you.


  1. Select Done.

Transfer Your Data into Treasure Data

To get the data from your SFTP server into Treasure Data, you can set up an ad hoc one-time transfer or a recurring transfer at a regular interval. In this section, you specify source details as described in the following steps.


Enter SFTP Server details.

Provide the details of the source files that you want to ingest data from.

  • Path prefix: Prefix of target files (string, required).

  • Incremental: Enables incremental loading (boolean, optional, default: true). If incremental loading is enabled, the config diff for the next execution includes the last_path parameter so that the next execution skips files before that path. Otherwise, last_path is not included. See the sketch after this list.
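
For illustration, after an incremental run the stored configuration might carry a last_path entry like the following. This is a sketch only; the path value is hypothetical:

Code Block
in:
  type: sftp
  path_prefix: /path/to/sample_
  incremental: true
  last_path: /path/to/sample_201505.csv.gz   # recorded by the previous run; earlier files are skipped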


Click Next to preview the data in the next step.

Preview

If there are no errors with the connection, you see a preview of the data to be imported. If you are unable to see the preview or have other issues viewing it, contact support.

...

The preview command downloads one file from the specified location and displays the results from that file. This may cause a difference in results between the preview and issue commands.

If you need to use non-standard options for your import, select Advanced Settings.

Advanced Settings

Advanced Settings allow you to modify aspects of your transfer to accommodate special requirements.

Transfer to

Select the Treasure Data target database and table that you want to import your data to. You can create a new database or table using the Create new database or Create new table checkboxes.

  • Mode: Append – Adds records to the existing table.

  • Mode: Replace – Replaces the existing data in the table with the data being imported.

  • Partition key Seed: Choose the long or timestamp column that you would like to use as the partitioning time column. If you do not specify a time column, the upload time of the transfer is used and a time column is added automatically.


Data Transfer Frequency

You can choose to run the transfer only one time or schedule it to run on a given frequency of your choosing.

  • When

    • Once now: Run the transfer only once.

    • Repeat…

      • Schedule: Accepts @hourly, @daily, @monthly, or a custom cron expression (see the examples after this list).

      • Delay Transfer: Add a delay to the execution time.

    • Scheduling Timezone: Timezone the data is stored in; data will also be displayed in this timezone. Supports extended timezone formats like ‘Asia/Tokyo’.
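
For reference, a custom cron expression uses the standard five-field syntax (minute, hour, day of month, month, day of week). Two illustrative examples:

Code Block
10 0 * * *     # every day at 00:10
0 */6 * * *    # every six hours, on the hour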


After selecting the frequency, click Start Transfer to begin the transfer. If there are no errors, the transfer into Treasure Data will complete and the data will be available. Jobs are kicked off when a transfer runs. You can use the Jobs or the My Input Transfers section to monitor the progress of your data transfer.

My Input Transfers

If you need to review the transfer you have just completed, or other data connector jobs, you can view a list of your transfers in the My Input Transfers section. You can also edit the details of data transfers.

Appendix

A) Optional Alternative: Use the CLI to Configure the Connector

You can also use the SFTP data connector from the command line interface. The following instructions show you how to import data using the CLI.

Install ‘td’ command v0.11.9 or later

Install the most current Treasure Data Toolbelt.

Code Block
$ td --version
0.11.10

Create Seed Config File (seed.yml)

Prepare seed.yml as shown in the following example, with your SFTP details. We support two authentication methods: Public / Private Key Pair, and Password.

Case 1: Public / Private Key Pair Authentication

Create seed.yml with the following content.

Code Block
in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  user: <USER>
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: <PASSPHRASE>
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
out:
  mode: append
  exec: {}

`secret_key_file` requires OpenSSH format.
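
If you already have a key in the newer OpenSSH private-key format, you may be able to rewrite it as PEM in place with ssh-keygen. A minimal sketch; the key file name is hypothetical, and because the file is rewritten, back it up first:

Code Block
# Rewrite an existing private key in PEM format (prompts for the passphrase).
$ ssh-keygen -p -m PEM -f td_sftp_key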

Case 2: Password Authentication

Create seed.yml with the following content.

Code Block
in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  auth_method: password
  user: <USER>
  password: <PASSWORD>
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
out:
  mode: append
  exec: {}

You can use the following special characters in the password: "#$!*@"

If you are using a proxy, add the additional information as shown:

Code Block
in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  ....
  proxy:
    type: http
    host: <PROXY_HOST>
    port: <PROXY_PORT>
    user: <PROXY_USER>
    password: <PROXY_PASSWORD>
    command: <SOMETHING COMMAND IF NEEDED>

The Data Connector for SFTP imports all files that match the specified prefix (e.g. path_prefix: path/to/sample_ –> path/to/sample_201501.csv.gz, path/to/sample_201502.csv.gz, …, path/to/sample_201505.csv.gz).

For more details on available out modes, see Appendix B.

Guess Fields (Generate load.yml)

Use connector:guess. This command automatically reads the source file and guesses the file format.

Code Block
$ td connector:guess seed.yml -o load.yml

If you open load.yml, you’ll see the guessed file format definitions, including file formats, encodings, column names, and types. This example loads CSV files.

Code Block
in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  user: <USER>
  secret_key_file:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      Proc-Type: 4,ENCRYPTED
      DEK-Info: AES-128-CBC...
      ...
      -----END RSA PRIVATE KEY-----
  secret_key_passphrase: <PASSPHRASE>
  user_directory_is_root: true
  timeout: 600
  path_prefix: /path/to/sample
  parser:
    skip_header_lines: 1
    charset: UTF-8
    newline: CRLF
    type: csv
    delimiter: ','
    quote: '"'
    columns:
    - {name: id, type: long}
    - {name: account, type: long}
    - {name: time, type: timestamp, format: "%Y-%m-%d %H:%M:%S"}
    - {name: purchase, type: timestamp, format: "%Y%m%d"}
    - {name: comment, type: string}
    - {name: json_column, type: json}
out:
  mode: append
  exec: {}

Then, you can preview how the system will parse the file by using the preview command.

Code Block
$ td connector:preview load.yml
+-------+---------+----------+---------------------+
| id    | company | customer | created_at          |
+-------+---------+----------+---------------------+
| 11200 | AA Inc. |    David | 2015-03-31 06:12:37 |
| 20313 | BB Imc. |      Tom | 2015-04-01 01:00:07 |
| 32132 | CC Inc. | Fernando | 2015-04-01 10:33:41 |
| 40133 | DD Inc. |    Cesar | 2015-04-02 05:12:32 |
| 93133 | EE Inc. |     Jake | 2015-04-02 14:11:13 |
+-------+---------+----------+---------------------+

The guess command needs more than 3 rows and 2 columns in the source data file, because it guesses the column definitions using sample rows from the source data.

If the guessed column names or types are not what you expect, modify load.yml directly and preview again.
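
A minimal sketch of such an edit, reusing the columns from the generated load.yml above; only the annotated line differs from the guess:

Code Block
parser:
  ...
  columns:
  - {name: id, type: long}
  - {name: account, type: string}   # changed from the guessed type: long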

Currently, the Data Connector supports parsing of “boolean”, “long”, “double”, “string”, and “timestamp” types.

You must also create the destination database and table prior to executing the data load job. Follow these steps:

Code Block
$ td database:create td_sample_db
$ td table:create td_sample_db td_sample_table

Execute Load Job

Submit the load job. It may take a couple of hours depending on the size of the data. Specify the Treasure Data database and table where the data should be stored.

It’s also recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time (see data partitioning). If the option is not provided, the data connector chooses the first long or timestamp column as the partitioning time. The column specified by --time-column must be of either long or timestamp type.

If your data doesn’t have a time column, you can add one by using the add_time filter option. For more details, see the add_time filter plugin.
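
As an illustration, a filter block can be added to load.yml; the options shown (to_column, from_value with upload_time mode) follow the add_time filter's documented options, but treat the exact shape as an assumption and check the plugin documentation:

Code Block
filters:
- type: add_time
  to_column:
    name: time         # column to create
    type: timestamp
  from_value:
    mode: upload_time  # stamp each record with the upload time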

Code Block
$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at

The connector:issue command assumes that you have already created a database (td_sample_db) and a table (td_sample_table). If the database or the table does not exist in TD, the connector:issue command fails. Create the database and table manually, or use the --auto-create-table option with the td connector:issue command to create them automatically:

Code Block
$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at --auto-create-table

The Data Connector does not sort records on the server side. To use time-based partitioning effectively, sort records in the files beforehand.

If you have a field called `time`, you don't have to specify the `--time-column` option.

Code Block
$ td connector:issue load.yml --database td_sample_db --table td_sample_table

Scheduled execution

You can schedule periodic Data Connector execution for incremental SFTP file import. We configure our scheduler carefully to ensure high availability. By using this feature, you no longer need a cron daemon in your local data center.

For the scheduled import, the Data Connector for SFTP imports all files that match the specified prefix (e.g. path_prefix: path/to/sample_ –> path/to/sample_201501.csv.gz, path/to/sample_201502.csv.gz, …, path/to/sample_201505.csv.gz) at first and remembers the last path (path/to/sample_201505.csv.gz) for the next execution.

On the second and subsequent runs, it imports only files that come after the last path in alphabetical (lexicographic) order (path/to/sample_201506.csv.gz, …).

Create the schedule

A new schedule can be created using the td connector:create command. The following are required: the name of the schedule, the cron-style schedule, the database and table where the data will be stored, and the Data Connector configuration file.

Code Block
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml

It’s also recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time.

Code Block
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at

The `cron` parameter also accepts three special options: `@hourly`, `@daily` and `@monthly`.

By default, the schedule is set up in the UTC timezone. You can set the schedule in a different timezone using the -t or --timezone option. The `--timezone` option supports only extended timezone formats like 'Asia/Tokyo' and 'America/Los_Angeles'. Timezone abbreviations like PST and CST are *not* supported and may lead to unexpected schedules.
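
For example, the daily schedule above pinned to Japan time:

Code Block
$ td connector:create \
    daily_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --time-column created_at \
    --timezone Asia/Tokyo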

List the Schedules

You can see the list of currently scheduled entries by running the command td connector:list.

Code Block
$ td connector:list
+--------------+--------------+----------+-------+--------------+-----------------+--------------------------------------------+
| Name         | Cron         | Timezone | Delay | Database     | Table           | Config                                     |
+--------------+--------------+----------+-------+--------------+-----------------+--------------------------------------------+
| daily_import | 10 0 * * *   | UTC      | 0     | td_sample_db | td_sample_table | {"in"=>{"type"=>"sftp", "access_key_id"... |
+--------------+--------------+----------+-------+--------------+-----------------+--------------------------------------------+

Show the Setting and Schedule History

td connector:show shows the execution setting of a schedule entry.

Code Block
% td connector:show daily_import
Name     : daily_import
Cron     : 10 0 * * *
Timezone : UTC
Delay    : 0
Database : td_sample_db
Table    : td_sample_table
Config
---
in:
  type: sftp
  host: <HOST>
  port: <PORT, default is 22>
  auth_method: password
  user: <USER>
  password: <PASSWORD>
  path_prefix: /sftp/file/path/prefix
  parser:
    charset: UTF-8
    ...

td connector:history shows the execution history of a schedule entry. To investigate the results of each individual run, use td job <jobid>.

Code Block
% td connector:history daily_import
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| JobID  | Status  | Records | Database     | Table           | Priority | Started                   | Duration |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
| 578066 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-18 00:10:05 +0000 | 160      |
| 577968 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-17 00:10:07 +0000 | 161      |
| 577914 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-16 00:10:03 +0000 | 152      |
| 577872 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-15 00:10:04 +0000 | 163      |
| 577810 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-14 00:10:04 +0000 | 164      |
| 577766 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-13 00:10:04 +0000 | 155      |
| 577710 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-12 00:10:05 +0000 | 156      |
| 577610 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-11 00:10:04 +0000 | 157      |
+--------+---------+---------+--------------+-----------------+----------+---------------------------+----------+
8 rows in set

Delete the Schedule

td connector:delete will remove the schedule.

Code Block
$ td connector:delete daily_import

Modes for out plugin

You can specify the file import mode in the out section of seed.yml.

append (default)

This is the default mode and records are appended to the target table.

Code Block
in:
  ...
out:
  mode: append

replace (In td 0.11.10 and later)

This mode replaces data in the target table. Note that any manual schema changes made to the target table will remain intact with this mode.

Code Block
in:
  ...
out:
  mode: replace

FAQ for the SFTP Data Connector

I can’t connect to my SFTP server. What can I do?

  • Check which protocol is valid for your environment. If you intend to use SFTP, you can use this Data Connector for SFTP. If you use FTP/FTPS, try connecting with the FTP Data Connector.

    • If you’re using a firewall, check your accepted IP range/port. Server administrators sometimes change the default port number from TCP/22 for security reasons.

    • Be sure that your private key is in the OpenSSH format. We don’t support other formats like “PuTTY”.

How do I troubleshoot data import problems?

Review the job log. Warnings and errors provide information about the success of your import. For example, you can identify the source file names associated with import errors.

After creating the authenticated connection, you are automatically taken to Authentications.

  1. Search for the connection you created. 

  2. Select New Source.


     

Connection

  1. Type a name for your Source in the Data Transfer field.




  2. Click Next

Source Table

  1. The Source dialog opens.




  2. Edit the following parameters 

  • User directory root

  • Path prefix: Prefix of target files (string, required).

  • Path match pattern: Type a regular expression to query file paths. If a file path doesn’t match the specified pattern, the file is skipped. For example, if you specify the pattern .csv$, then a file is skipped if its path doesn’t match that pattern.

  • Incremental: Enables incremental loading (boolean, optional, default: true). If incremental loading is enabled, the config diff for the next execution includes the last_path parameter so that the next execution skips files before that path. Otherwise, last_path is not included.

  • Start after path: Only paths lexicographically greater than this will be imported.

Data Settings

  1. Select Next.
    The Data Settings page opens.

  2. Optionally, edit the data settings or skip this page of the dialog.




Filters


See Applying Import Integration Filters.

Preview


See Data Preview.


Data Placement

See Data Placement.