Using this SFTP Server Export Integration, you can send workflow job results directly to your SFTP server.
- Basic knowledge of Treasure Data, including the toolbelt
- An SFTP server
- Path prefix cannot contain any of the following characters: * ?
- The connection timeout setting does not function properly in v1, so an export job can run for a long time instead of timing out.
If your security policy requires IP allowlisting, you must add Treasure Data's IP addresses to your allowlist to ensure a successful connection.
The complete list of static IP addresses, organized by region, is available at the following link:
https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/
The following figure illustrates a typical usage scenario that involves exporting data from Treasure Data Agents into an SFTP or similar server.

Obtain your credentials to access your SFTP server.
1. Navigate to Data Workbench > Queries.
2. Select New Query.
3. Run the query to validate the result set.

4. Select Export Results.
5. Select an existing integration or create a new one to authenticate with the external service used for output. Choose one of the following:
   - Use Existing Integration
   - Create a New Integration

Enter the required details and credentials for your remote SFTP instance.
- Host: The host information of the remote SFTP instance, for example, an IP address.
- Port: The connection port on the remote SFTP instance. The default port number is 22.
- User: The user name used to connect to the remote SFTP instance.
- Authentication mode: The way you choose to authenticate with your SFTP server.
- Secret key file: Required if public/private key pair is selected as the Authentication mode. (The Ed25519 key type is not supported; DSA, RSA, and ECDSA key types are supported.)
- Passphrase for secret key file: (Optional) If required, provide a passphrase for the provided secret file.
- Retry limit: Number of times to retry a failed connection (default 10).
- Timeout: Connection timeout in seconds (default 600).
Specify information for Export to SFTP

| Field | Description |
|---|---|
| Is user directory Root? | If selected, the user directory is treated as the root directory (for example, /home/treasure-data is treated as /). |
| Path prefix | The file path where the file will be stored. |
| Rename file after upload finish | If selected, SFTP result output renames the file on the remote SFTP server from .xxx.tmp to .xxx after all the data is transferred. Some MA tools try to import data when a file with a specific name exists on the SFTP server. The temp name option is useful for such cases. |
| Format | The format of the exported files. |
| Encoders | The encoder applied to the file before upload (for example, PGP encryption). |
| Public Key (required when Encoder is PGP encryption) | The public key used to encrypt the file before it is uploaded. |
| Key Identifier (required when Encoder is PGP encryption) | The Key ID of the encryption subkey used to secure the file. The master key is excluded from the encryption process. |
| Armor (optional) | Whether to use ASCII armor. |
| Compression Type | The compression algorithm used to compress the file. The file is compressed before encryption and upload; when you decrypt it, the file returns to a compressed format such as .gz or .bz2. |
| Header line? | If selected, the column names are included as the first line. |
| Delimiter | The delimiter character. |
| Quote policy | The policy for quoting. |
| Null string | How a null value in the query result is displayed. |
| End-of-line character | The EOL (end-of-line) character. |
| Temp filesize threshold | The maximum file size (in bytes) of a local temp file. When the temp file reaches the threshold, the file is flushed to the remote file. If you encounter the "SFTP Exception: no such file" error, see the FAQ below. |
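The CSV-related options above (Header line, Delimiter, Quote policy, Null string, End-of-line character) map onto standard CSV writer settings. A minimal sketch using Python's csv module, with illustrative column names and rows, showing how a header line, comma delimiter, MINIMAL quoting, an empty null string, and a CRLF end-of-line character combine:

```python
import csv
import io

# Illustrative rows; None stands in for a NULL query result value.
rows = [("alice", 30), ("bob", None)]

buf = io.StringIO()
writer = csv.writer(
    buf,
    delimiter=",",              # Delimiter option
    quoting=csv.QUOTE_MINIMAL,  # Quote policy: MINIMAL
    lineterminator="\r\n",      # End-of-line character: CRLF
)
writer.writerow(["name", "age"])  # Header line? enabled
for name, age in rows:
    # Null string: render NULL as an empty string
    writer.writerow([name, "" if age is None else age])

print(buf.getvalue())
```

The output contains a header line followed by one CRLF-terminated record per row, with the NULL rendered as an empty field.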
You can use Scheduled Jobs with Result Export to periodically write the output result to a target destination that you specify.
Treasure Data's scheduler feature supports periodic query execution to achieve high availability.
When the 'day of month' and 'day of week' fields provide conflicting specifications, the specification requesting to execute more often is followed and the other is ignored.
For example, if the cron schedule is '0 0 1 * 1', the 'day of month' and 'day of week' specifications are discordant: the former requires a run on the first day of each month at midnight (00:00), while the latter requires a run every Monday at midnight (00:00). The 'day of week' specification runs more often, so it is followed.
1. Navigate to Data Workbench > Queries.
2. Create a new query or select an existing query.
3. Next to Schedule, select None.
4. In the drop-down, select one of the following schedule options:

| Drop-down Value | Description |
|---|---|
| Custom cron... | Review Custom cron... details. |
| @daily (midnight) | Run once a day at midnight (00:00 am) in the specified time zone. |
| @hourly (:00) | Run every hour at 00 minutes. |
| None | No schedule. |

| Cron Value | Description |
|---|---|
| 0 * * * * | Run once an hour. |
| 0 0 * * * | Run once a day at midnight. |
| 0 0 1 * * | Run once a month at midnight on the morning of the first day of the month. |
| "" | Create a job that has no scheduled run time. |
* * * * *
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +---------- month (1 - 12)
| | +--------------- day of month (1 - 31)
| +-------------------- hour (0 - 23)
+------------------------- min (0 - 59)

The following named entries can be used:
- Day of Week: sun, mon, tue, wed, thu, fri, sat.
- Month: jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec.
A single space is required between each field. The values for each field can be composed of:
| Field Value | Example | Example Description |
|---|---|---|
| A single value, within the limits displayed above for each field. | | |
| A wildcard '*' to indicate no restriction based on the field. | '0 0 1 * *' | Configures the schedule to run at midnight (00:00) on the first day of each month. |
| A range '2-5', indicating the range of accepted values for the field. | '0 0 1-10 * *' | Configures the schedule to run at midnight (00:00) on the first 10 days of each month. |
| A list of comma-separated values '2,3,4,5', indicating the list of accepted values for the field. | '0 0 1,11,21 * *' | Configures the schedule to run at midnight (00:00) on the 1st, 11th, and 21st day of each month. |
| A periodicity indicator '*/5' to express how often, based on the field's valid range of values, a schedule is allowed to run. | '30 */2 1 * *' | Configures the schedule to run on the 1st of every month, every 2 hours starting at 00:30. '0 0 */5 * *' configures the schedule to run at midnight (00:00) every 5 days starting on the 5th of each month. |
| A comma-separated list of any of the above except the '*' wildcard, for example '2,*/5,8-10'. | '0 0 5,*/10,25 * *' | Configures the schedule to run at midnight (00:00) every 5th, 10th, 20th, and 25th day of each month. |
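The field syntax above (single values, wildcards, ranges, steps, and comma-separated lists) can be sketched as a small expander; `expand_field` below is a hypothetical helper for illustration, not part of Treasure Data's scheduler:

```python
def expand_field(spec: str, lo: int, hi: int) -> set:
    """Expand one cron field ('*', '2-5', '*/5', '2,*/5,8-10') into the
    set of matching integer values within [lo, hi]."""
    values = set()
    for part in spec.split(","):
        part, _, step = part.partition("/")
        step = int(step) if step else 1
        if part == "*":
            start, end = lo, hi        # wildcard: the field's full range
        elif "-" in part:
            start, end = map(int, part.split("-"))
        else:
            start = end = int(part)    # single value
        values.update(range(start, end + 1, step))
    return values

# '0 0 1-10 * *' from the table: midnight on the first 10 days of a month.
minute, hour, dom, month, dow = "0 0 1-10 * *".split()
print(sorted(expand_field(dom, 1, 31)))  # days 1 through 10
```

A list entry such as '2,*/5,8-10' simply takes the union of the sets produced by each part.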
- (Optional) You can delay the start time of a query by enabling Delay execution.
Save the query with a name and run it, or just run the query. Upon successful completion, the query result is automatically exported to the specified destination.
Scheduled jobs that continuously fail due to configuration errors may be disabled on the system side after several notifications.
You can also send segment data to the target platform by creating an activation in the Audience Studio.
- Navigate to Audience Studio.
- Select a parent segment.
- Open the target segment, right-click, and then select Create Activation.
- In the Details panel, enter an Activation name and configure the activation according to the previous section on Configuration Parameters.
- Customize the activation output in the Output Mapping panel.

- Attribute Columns
- Select Export All Columns to export all columns without making any changes.
- Select + Add Columns to add specific columns for the export. The Output Column Name pre-populates with the same Source column name. You can update the Output Column Name. Continue to select + Add Columns to add new columns for your activation output.
- String Builder
- + Add string to create strings for export. Select from the following values:
- String: Choose any value; use text to create a custom value.
- Timestamp: The date and time of the export.
- Segment Id: The segment ID number.
- Segment Name: The segment name.
- Audience Id: The parent segment number.
- Set a Schedule.

- Select the values to define your schedule and optionally include email notifications.
- Select Create.
If you need to create an activation for a batch journey, review Creating a Batch Journey Activation.
You can also use CLI for Result Export to SFTP.
The following example escapes the newlines in the private key with a backslash (\n):
$ td query --result '{"type":"sftp","host":"xx.xx.xx.xx","port":22,"username":"xxxx","secret_key_file":"{\"content\":\"-----BEGIN RSA PRIVATE KEY-----\nABCDEFJ\nABCDEFJ\n-----END RSA PRIVATE KEY-----\"}","secret_key_passphrase":"xxxxxx", "user_directory_is_root":true,"path_prefix":"/path/to/file.csv","rename_file_after_upload":false,"header_line":true,"quote_policy":"MINIMAL","delimiter":",","null_string":"","newline":"CRLF"}' -d sample_datasets "select * from www_access" -T presto

Example:
$ td query \
--result '{"type":"sftp","host":"xx.xx.xx.xx","port":22,"auth_method":"Password","username":"xxxx","password":"xxxxx","user_directory_is_root":true,"path_prefix":"/path/to/file.csv","rename_file_after_upload":false,"header_line":true,"quote_policy":"MINIMAL","delimiter":",","null_string":"","newline":"CRLF"}' \
-d sample_datasets "select * from www_access" -T presto

Within Treasure Workflow, you can specify the use of this integration to export data. For sample workflows, view Treasure Boxes.
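Escaping the key's newlines by hand is error-prone; one option is to build the `--result` JSON with a JSON library and let it handle the escaping. A sketch in Python with placeholder host, user, and key content (note that `secret_key_file` is itself a JSON-encoded string, matching the CLI example above):

```python
import json

# Placeholder key content; json.dumps turns the real newlines into
# \n escapes, matching the escaped-newline format shown in the docs.
private_key = (
    "-----BEGIN RSA PRIVATE KEY-----\n"
    "ABCDEFJ\nABCDEFJ\n"
    "-----END RSA PRIVATE KEY-----"
)

result = {
    "type": "sftp",
    "host": "xx.xx.xx.xx",
    "port": 22,
    "username": "xxxx",
    # Nested encoding: the value is a JSON string containing a JSON object.
    "secret_key_file": json.dumps({"content": private_key}),
    "secret_key_passphrase": "xxxxxx",
    "user_directory_is_root": True,
    "path_prefix": "/path/to/file.csv",
    "rename_file_after_upload": False,
    "header_line": True,
    "quote_policy": "MINIMAL",
    "delimiter": ",",
    "null_string": "",
    "newline": "CRLF",
}

# Pass this string as the --result argument to td query.
result_json = json.dumps(result)
print(result_json)
```

The resulting string can be substituted for the hand-written `--result` argument in the commands above.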
- The Embulk encoder encryption (PGP) document: https://docs.treasuredata.com/articles/#!project-integrations/~embulk-encoder-encryption-pgp
Q: I cannot connect to my SFTP server. What can I do?
A: Here are some things to check:
- Verify that your protocol is valid. If you intended to use SFTP, you can use this integration. If you intended to use FTP/FTPS, try to connect with the FTP Server Import Integration.
- If you are using a firewall, check your accepted IP range and port. Server administrators sometimes change the default port number from TCP 22 for security reasons.
- Be sure that your private key is in the OpenSSH PEM format. Treasure Data does not support other formats such as PuTTY's.
- OpenSSH 7.8 and later generate private keys in a new default format that Treasure Data does not support. Regenerate the key using the '-m PEM' option.
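One quick way to check which format a key is in is to look at its first line; a sketch (the header strings are the standard PEM and new-OpenSSH-format markers):

```python
# First line of a PEM (supported) vs. new OpenSSH-format (unsupported) key.
SUPPORTED_HEADERS = (
    "-----BEGIN RSA PRIVATE KEY-----",
    "-----BEGIN DSA PRIVATE KEY-----",
    "-----BEGIN EC PRIVATE KEY-----",
)
NEW_OPENSSH_HEADER = "-----BEGIN OPENSSH PRIVATE KEY-----"

def key_format(first_line: str) -> str:
    """Classify a private key by its first line."""
    line = first_line.strip()
    if line in SUPPORTED_HEADERS:
        return "PEM (supported)"
    if line == NEW_OPENSSH_HEADER:
        return "new OpenSSH format (re-generate with '-m PEM')"
    return "unknown"

print(key_format("-----BEGIN OPENSSH PRIVATE KEY-----"))
```

A key whose first line is the OPENSSH header needs to be regenerated with `ssh-keygen -m PEM` before it can be used with this integration.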
Q: I cannot find my uploaded file on the SFTP server. What happened?
A: If your SFTP server does not support Append mode, the integration may fail to update the file, producing the error "SFTP Exception: no such file." For stability reasons, the integration uploads the file content in batches, so Append mode must be enabled on your SFTP server. If Append mode cannot be enabled, set temp_file_threshold to 0 to bypass this feature.
Note: Please ensure that you compress your file before encrypting and uploading.
When you decrypt using non-built-in encryption, the file will return to a compressed format such as .gz or .bz2.
When you decrypt using built-in encryption, the file will return to raw data.