The Data Connector for PostgreSQL enables you to directly import data from your PostgreSQL database to Treasure Data. For PostgreSQL data export, see PostgreSQL Export Integration.
- Basic knowledge of Treasure Data
- Basic knowledge of PostgreSQL
- A PostgreSQL instance running remotely, for example, on RDS.
If your security policy requires IP whitelisting, you must add Treasure Data's IP addresses to your allowlist to ensure a successful connection.
Please find the complete list of static IP addresses, organized by region, at the following link:
https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/
When you configure a data connection, you provide authentication to access the integration. In Treasure Data, you configure the authentication and then specify the source information.
- Open TD Console.
- Navigate to Integrations Hub > Catalog.
- Search for and select PostgreSQL. Select Create.
- In the dialog that opens, enter the required credentials and set the parameters. Select Continue.
| Parameter | Description |
|---|---|
| Host | The host information of the source database, such as an IP address. |
| Port | The connection port on the source instance. The PostgreSQL default is 5432. |
| User | The username to connect to the source database. |
| Password | The password to connect to the source database. |
| Use SSL | Check this box to connect using SSL. |
| Specify SSL version | Select the SSL version to use for the connection. |
| Socket connection timeout | Timeout (in seconds) for socket connection (default is 300). |
| Network timeout | Timeout (in seconds) for network socket operations. 0 means no timeout. |
- Type a name for your connection.
- Select Done.
After creating the authenticated connection, you are automatically taken to Authentications.
- Search for the connection you created.
- Select New Source. The Create Source dialog opens.
- Type a name for your Source in the Data Transfer field.

- Select Next.
- Edit the following parameters.

| Parameters | Description |
|---|---|
| Driver version | Select PostgreSQL JDBC Driver |
| Database name | The name of the database from which you are transferring data. For example, your_database_name. |
| Use custom SELECT query? | Use if you need more than a simple SELECT (columns) FROM table WHERE (condition). |
| SELECT columns | If there are only specific columns you would like to pull data from, list them here. Otherwise, all columns are transferred. |
| Table | The table from which you want to import the data. |
| WHERE condition | If you need additional specificity on the data retrieved from the table, specify it as part of the WHERE clause. |
| ORDER BY | Specify if you need the records ordered by a particular field. |
- Select Next. The Data Settings page opens.
- Optionally, edit the data settings or skip this page of the dialog.

| Parameters | Description |
|---|---|
| Incremental | When you want to run this transfer repeatedly, select this checkbox to import data only since the last time the import was run. |
| Rows per batch | Extremely large datasets can lead to memory issues and, subsequently, failed jobs. Use this flag to break down the import job into batches by the number of rows to reduce the chances of memory issues and failed jobs. |
| Default timezone | The timezone to be used when doing the import. |
| After SELECT | This SQL is executed after the SELECT query in the same transaction. |
| Column Options | Select this option to modify the type of column before importing it. Select Save to save any data setting you have entered. |
| Default Column Options | Select this option to define the data type according to default SQL types before importing it. Select Save to save any data settings you have entered. This option is not available in the TD Console. Set this option using TD CLI or TD Workflow. |
You can see a preview of your data before running the import by selecting Generate Preview. Data preview is optional and you can safely skip to the next page of the dialog if you choose to.
- Select Next. The Data Preview page opens.
- If you want to preview your data, select Generate Preview.
- Verify the data.
For data placement, select the target database and table where you want your data placed and indicate how often the import should run.
Select Next. Under Storage, create a new database and table or select an existing database and table where you want to place the imported data.
Select a Database > Select an existing or Create New Database.
Optionally, type a database name.
Select a Table > Select an existing or Create New Table.
Optionally, type a table name.
Choose the method for importing the data.
- Append (default): Data import results are appended to the table. If the table does not exist, it is created.
- Always Replace: Replaces the entire content of an existing table with the result output of the query. If the table does not exist, a new table is created.
- Replace on New Data: Replaces the entire content of an existing table with the result output only when there is new data.
Select the Timestamp-based Partition Key column. To set a partition key other than the default, specify a long or timestamp column as the partitioning time. By default, the connector uses upload_time with the add_time filter as the time column.
Select the Timezone for your data storage.
Under Schedule, you can choose when and how often you want to run this query.
- If you select Off:
  - Select Scheduling Timezone.
  - Select Create & Run Now.
- If you select On:
  - Select the Schedule. The UI provides four options: @hourly, @daily, @monthly, or custom cron.
  - Optionally, select Delay Transfer and add a delay to the execution time.
  - Select Scheduling Timezone.
  - Select Create & Run Now.
After your transfer has run, you can see the results of your transfer in Data Workbench > Databases.
Install the newest Treasure Data Toolbelt.
Configure seed.yml with your PostgreSQL access information:
```
in:
  type: postgresql
  host: postgresql_host_name
  port: 5432
  ssl: true
  ssl_version: TLS
  user: test_user
  password: test_password
  driver_version: 42.7.x
  database: test_database
  table: test_table
  select: "*"
  default_column_options:
    TIMESTAMP: {type: string, timestamp_format: '%Y-%m-%d', timezone: '+0900'}
    BIGINT: {type: string}
out:
  mode: replace
```

This example imports all records inside the table. You can have more detailed control with additional parameters.
For more details on available out modes or on available ssl_version, see the Appendix.
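For finer control, here is a minimal sketch using additional source parameters. The WHERE condition and column names are hypothetical, and fetch_rows, connect_timeout, and socket_timeout are options of the underlying JDBC input plugin that roughly correspond to the console's Rows per batch, Socket connection timeout, and Network timeout settings:

```
in:
  type: postgresql
  host: postgresql_host_name
  port: 5432
  user: test_user
  password: test_password
  database: test_database
  table: test_table
  select: "id, name, created_at"      # pull only specific columns
  where: "created_at > '2024-01-01'"  # hypothetical WHERE condition
  order_by: "id"                      # order records by a particular field
  fetch_rows: 10000                   # rows per batch, to reduce memory pressure
  connect_timeout: 300                # socket connection timeout in seconds
  socket_timeout: 1800                # network timeout in seconds; 0 means no timeout
out:
  mode: replace
```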
Use td connector:guess. This command automatically reads the target data and intelligently guesses the data format.
```
$ td connector:guess seed.yml -o load.yml
```

If you open load.yml, you will see the guessed file format definitions, including, in some cases, file formats, encodings, column names, and types.
You can preview data to be imported using the command td connector:preview.
```
$ td connector:preview load.yml
```

Submit the load job. The job may take a couple of hours to run, depending on the data size. You must specify the database and table where your data is stored.
It is recommended to specify the --time-column option because Treasure Data’s storage is partitioned by time (see also data partitioning). If the option is not given, the data connector chooses the first long or timestamp column as the partitioning time. The column specified by --time-column must be of long or timestamp type.
If your data doesn’t have a time column, you may add it using the add_time filter option. For more details, see the add_time filter plugin.
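As a minimal sketch, assuming your table has an existing created_at timestamp column, the add_time filter can copy it into a new time column (the column names here are illustrative):

```
in:
  type: postgresql
  ...
filters:
  - type: add_time
    to_column: {name: time, type: timestamp}  # new time column to add
    from_column: {name: created_at}           # assumed existing timestamp column
out:
  ...
```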
```
$ td connector:issue load.yml --database td_sample_db --table td_sample_table
```

The connector:issue command assumes that you have already created the database (td_sample_db) and the table (td_sample_table). If the database or the table does not exist in TD, the connector:issue command fails. In this case, create the database and table manually, or use the --auto-create-table option with the td connector:issue command to create them automatically:
```
$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column created_at --auto-create-table
```

You can assign the Time Format column to the "Partitioning Key" with the --time-column option.
For sample workflows showing how to import data from PostgreSQL, view Treasure Boxes.
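As a rough sketch, a scheduled Treasure Workflow can run a connector load with the td_load> operator; the task name, config file path, and schedule below are examples:

```
timezone: UTC

schedule:
  daily>: 10:00:00

+load_from_postgresql:
  td_load>: config/seed.yml
  database: td_sample_db
  table: td_sample_table
```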
You can load records incrementally by specifying columns from your table in the incremental_columns parameter. Optionally, you may specify initial values in the last_record parameter.
```
in:
  type: postgresql
  host: postgresql_host_name
  port: 5432
  user: test_user
  password: test_password
  database: test_database
  table: test_table
  incremental: true
  incremental_columns: [id, created_at]
  last_record: [10000, '2014-02-16T13:01:06.000000Z']
out:
  mode: append
exec: {}
```

To optimally use the incremental_columns: option, create an index on the relevant columns to avoid full table scans. For this example, the following index should be created:
```
CREATE INDEX embulk_incremental_loading_index ON test_table (id, created_at);
```

The connector automatically creates the query and sort value.
```
-- when last_record wasn't given
SELECT * FROM (
  ...original query is here
)
ORDER BY id, created_at

-- when last_record was given
SELECT * FROM (
  ...original query is here
)
WHERE id > 10000 OR (id = 10000 AND created_at > '2014-02-16T13:01:06.000000Z')
ORDER BY id, created_at
```

The connector automatically generates last_record and uses it at the next scheduled execution.
```
in:
  type: postgresql
  ...
out:
  ...
```

Config Diff:

```
in:
  last_record:
  - 20000
  - '2015-06-16T16:32:14.000000Z'
```

The query option is unavailable when you set incremental: true. Only strings, integers, timestamp, and timestamptz (timestamp with time zone) are supported as incremental_columns.
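To run incremental loads on a schedule from the CLI, you can register the configuration with td connector:create; the session name and cron expression below are examples:

```
$ td connector:create daily_postgresql_import "10 0 * * *" td_sample_db td_sample_table load.yml
```

Each scheduled run then uses the last_record saved by the previous run.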
PostgreSQL’s array type is retrieved as string type.
PostgreSQL’s hstore type is retrieved as string type when the data connector first reads it. Therefore, if you want to use the hstore type as json type, you need to specify column_options and explicitly convert the type to json type.
For example, v_hstore is hstore type in PostgreSQL:
```
in:
  type: postgresql
  host: xxx
  ...
  table: my_tbl
  select: "*"
  column_options:
    v_hstore: {type: json} # explicit type conversion: string type to json type
out:
  ...
```

You can use the specific SSL version that your PostgreSQL server requires with the ssl_version option.
```
in:
  type: postgresql
  ...
  ssl: true
  ssl_version: TLSv1.1
```

Supported values are as follows.
- TLS
- TLSv1.1
- TLSv1.2
- TLSv1.3
You can set a default format for an SQL data type. In the example below, TIMESTAMP is converted to string with the format %Y-%m-%d and the time zone +0900. The SQL type must be in upper case, such as TIMESTAMP or BIGINT.
```
in:
  type: postgresql
  ...
  default_column_options:
    TIMESTAMP: {type: string, timestamp_format: '%Y-%m-%d', timezone: '+0900'}
    BIGINT: {type: string}
```