
Delta Sharing Import Integration

This feature is in beta. For more information, contact your Customer Success representative.

Delta Sharing helps you organize and integrate customer data within your company and transform it into data that is ready for marketing use. With this integration, you can import Delta Sharing data into Treasure Data.

Prerequisites

  • Basic knowledge of Treasure Data
  • Basic knowledge of Delta Sharing server

Requirements and Limitations

  • You must have a user ID on the Delta Sharing server with sufficient permissions to access information on the server.
  • You must have an API endpoint for the Delta Sharing metastores.
  • You must have a bearer token to authenticate the API calls to the Delta Sharing server.

Static IP Address of Treasure Data

The static IP address of Treasure Data is the access point and source of the linkage for this Integration. To determine the static IP address, contact your Customer Success representative or technical support.

Obtain Endpoint and Authentication Token from Databricks

The integration can read data from Databricks by using Delta Sharing open sharing with recipients.

When a Databricks workspace shares data through open sharing, each recipient receives an activation link from which a credential file can be downloaded. That file contains the two values this integration needs: the sharing endpoint and the bearer token.
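For reference, the downloaded credential file (commonly named config.share) is a small JSON document along these lines; the values below are placeholders:

{
  "shareCredentialsVersion": 1,
  "endpoint": "https://sharing.example.com/delta-sharing/",
  "bearerToken": "***",
  "expirationTime": "2026-01-01T00:00:00.000Z"
}

Use the endpoint value as the Endpoint and the bearerToken value as the Bearer Token when creating the authentication below.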

Import from Delta Sharing Server via TD Console

Create Authentication

Your first step is to create a new authentication with a set of credentials.

  1. Select Integrations Hub.
  2. Select Catalog.
  3. Search for Delta Sharing in the Catalog, hover your mouse over the icon, and select Create Authentication.
  4. Ensure that the Credentials tab is selected, and then enter the credential information for the integration.

New Authentication Fields

| Parameter | Description |
| --- | --- |
| Endpoint | Your API endpoint for the Delta Sharing server |
| Bearer Token | Your token to access the Delta Sharing server API |

  5. Select Continue.
  6. Enter a name for your authentication and select Done.

Create a Source

  1. Open TD Console.
  2. Navigate to Integrations Hub > Authentications.
  3. Locate your new authentication and select New Source.

The Create Source page displays with the Connection tab selected.

Create a Connection

  1. Enter a source name in the Data Transfer Name field.
  2. Enter the name of the authentication to use for the data transfer.
  3. Select Next.

The Create Source page displays with the Source Table tab selected.

Identify a Source Table

  1. Edit the parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Share | Yes | Share name |
| Schema | Yes | Schema name |
| Table | Yes | Table name |
| Default Timezone | Yes | The default timezone for the timestamp format |

  2. Select Next.

The Create Source page displays with the Data Settings tab selected.

Specify Data Settings

  1. Edit the parameters:

| Parameter | Required | Description |
| --- | --- | --- |
| Retry Limit | No | The maximum number of retries for failed requests |
| Initial Retry Interval In Millis | No | The initial retry interval in milliseconds |
| Max Retry Wait In Millis | No | The maximum retry wait in milliseconds |
| HTTP Connect Timeout In Millis | No | The HTTP connection timeout in milliseconds |
| HTTP Read Timeout In Millis | No | The HTTP read timeout in milliseconds |
| HTTP Write Timeout In Millis | No | The HTTP write timeout in milliseconds |

  2. Select Next.

The Create Source page displays with the Data Preview tab selected.

Data Preview

You can preview your data before running the import by selecting Generate Preview. The preview is optional; you can safely skip to the next page of the dialog.

  1. Select Next. The Data Preview page opens.
  2. If you want to preview your data, select Generate Preview.
  3. Verify the data.

Data Placement

For data placement, select the target database and table where you want your data placed and indicate how often the import should run.

  1. Select Next. Under Storage, create a new database and table, or select existing ones, for the imported data.

  2. Select a Database: select an existing database or Create New Database.

  3. Optionally, type a database name.

  4. Select a Table: select an existing table or Create New Table.

  5. Optionally, type a table name.

  6. Choose the method for importing the data:

    • Append (default): import results are appended to the table. If the table does not exist, it is created.
    • Always Replace: replaces the entire content of an existing table with the result output of the query. If the table does not exist, a new table is created.
    • Replace on New Data: replaces the entire content of an existing table with the result output only when there is new data.
  7. Select the Timestamp-based Partition Key column. To use a partition key other than the default, specify a long or timestamp column as the partitioning time. By default, upload_time is used together with the add_time filter.

  8. Select the Timezone for your data storage.

  9. Under Schedule, choose when and how often to run this import.

Run once

  1. Select Off.
  2. Select Scheduling Timezone.
  3. Select Create & Run Now.

Repeat Regularly

  1. Select On.
  2. Select the Schedule. The UI provides four options: @hourly, @daily, and @monthly, or custom cron.
  3. You can also select Delay Transfer to delay the execution by a set amount of time.
  4. Select Scheduling Timezone.
  5. Select Create & Run Now.

After your transfer has run, you can see the results of your transfer in Data Workbench > Databases.

Import from Delta Sharing Server via Workflow

You can import data from the Delta Sharing server by using the td_load> operator in a workflow. If you have already created a source, you can run it from the workflow; if you prefer not to create a source, you can define the import in a .yml file instead (see the example after the parameters reference below).

Running a Source

  1. Identify your source.
  2. To obtain its unique ID, open the Source list and filter by Delta Sharing.
  3. Open the menu and select Copy Unique ID.
  4. Define a workflow task using the td_load> operator:

+load:
  td_load>: unique_id_of_your_source
  database: ${td.dest_db}
  table: ${td.dest_table}

  5. Run the workflow.
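For a scheduled import, the task can live in a small workflow file. The following is a minimal sketch, assuming a daily schedule; the unique ID and the td_sample_db and td_sample_table names are placeholders:

# delta_sharing_import.dig -- minimal scheduled workflow sketch
timezone: UTC

schedule:
  daily>: 01:00:00              # run once a day at 01:00 UTC

_export:
  td:
    dest_db: td_sample_db       # placeholder destination database
    dest_table: td_sample_table # placeholder destination table

+load:
  td_load>: unique_id_of_your_source  # the unique ID copied in step 3
  database: ${td.dest_db}
  table: ${td.dest_table}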

Parameters Reference

| Name | Description | Default Value | Required |
| --- | --- | --- | --- |
| endpoint | Delta Sharing server endpoint | | true |
| token | Bearer token to authenticate API interaction | | true |
| share | The share name to connect to | | true |
| schema | The schema name to connect to | | true |
| table | The table name to connect to | | true |
| default_timezone | The default timezone for the timestamp format | UTC | true |
| retry_limit | The maximum number of retries | 6 | false |
| retry_initial_wait_msecs | The initial retry wait in milliseconds | 30000 | false |
| max_retry_wait_msecs | The maximum retry wait in milliseconds | 120000 | false |
| connection_timeout_msecs | The HTTP connection timeout in milliseconds | 1800000 | false |
| write_timeout_msecs | The HTTP write timeout in milliseconds | 1800000 | false |
| read_timeout_msecs | The HTTP read timeout in milliseconds | 1800000 | false |
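If you prefer the .yml approach mentioned above, the td_load> operator also accepts a load configuration file stored in the workflow project. A minimal sketch, assuming a hypothetical config/delta_sharing_load.yml laid out like the CLI seed file below:

+load:
  td_load>: config/delta_sharing_load.yml  # hypothetical path inside the workflow project
  database: ${td.dest_db}
  table: ${td.dest_table}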

Sample Workflow Code

Visit Treasure Boxes for sample workflow code.

Import from Delta Sharing Server via CLI (Toolbelt)

Before setting up the integration, install the most current TD Toolbelt.

Create a Seed Configuration File (seed.yml)

in:
  type: delta_sharing
  endpoint: http://endpoint.com
  token: ***
  schema: schema
  table: table
  share: share
  retry_limit: 7
  retry_initial_wait_msecs: 30000
  max_retry_wait_msecs: 120000
  connection_timeout_msecs: 1800000
  write_timeout_msecs: 1800000
  read_timeout_msecs: 1800000
out:
  mode: append

Parameters Reference

| Name | Description | Default Value | Required |
| --- | --- | --- | --- |
| endpoint | Delta Sharing server endpoint | | true |
| token | Bearer token to authenticate API interaction | | true |
| share | The share name to connect to | | true |
| schema | The schema name to connect to | | true |
| table | The table name to connect to | | true |
| default_timezone | The default timezone for the timestamp format; supports both short and full zone IDs | UTC | true |
| retry_limit | The maximum number of retries | 6 | false |
| retry_initial_wait_msecs | The initial retry wait in milliseconds | 30000 | false |
| max_retry_wait_msecs | The maximum retry wait in milliseconds | 120000 | false |
| connection_timeout_msecs | The HTTP connection timeout in milliseconds | 1800000 | false |
| write_timeout_msecs | The HTTP write timeout in milliseconds | 1800000 | false |
| read_timeout_msecs | The HTTP read timeout in milliseconds | 1800000 | false |


Generate load.yml

Use connector:guess. This command reads sample data from the source and guesses the column names and types.

$ td connector:guess seed.yml -o load.yml

You can open load.yml to review the generated definitions, including column names and types.

Example

in:
  type: delta_sharing
  endpoint: http://endpoint.com
  token: ***
  schema: schema
  table: table
  share: share
  retry_limit: 7
  retry_initial_wait_msecs: 30000
  max_retry_wait_msecs: 120000
  connection_timeout_msecs: 1800000
  write_timeout_msecs: 1800000
  read_timeout_msecs: 1800000
out:
  mode: append

To preview the data, use the td connector:preview command.

$ td connector:preview load.yml
+-------+---------+----------+---------------------+
| id    | company | customer | created_at          |
+-------+---------+----------+---------------------+
| 11200 | AA Inc. | David    | 2015-03-31 06:12:37 |
| 20313 | BB Inc. | Tom      | 2015-04-01 01:00:07 |
| 32132 | CC Inc. | Fernando | 2015-04-01 10:33:41 |
| 40133 | DD Inc. | Cesar    | 2015-04-02 05:12:32 |
| 93133 | EE Inc. | Jake     | 2015-04-02 14:11:13 |
+-------+---------+----------+---------------------+

The guess command requires more than 3 rows and 2 columns in the source data because it derives the column definitions from sample rows of the source data.

If a column name or column type is detected incorrectly, modify the load.yml file and preview again.

Execute Load Job

It might take a couple of hours to run the job, depending on the size of the data. Be sure to specify the Treasure Data database and table where the data should be stored.

Treasure Data also recommends specifying the --time-column option because Treasure Data’s storage is partitioned by time (see data partitioning). If this option is not provided, the data integration chooses the first long or timestamp column as the partitioning time. The column specified by --time-column must be of type long or timestamp.

If your data doesn’t have a time column, you can add a time column by using the add_time filter option. For more details see add_time filter plugin.
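A minimal sketch of what this could look like in load.yml, assuming the source has a created_at timestamp column and using the add_time filter's from_column option:

in:
  type: delta_sharing
  ...
filters:
  - type: add_time
    to_column:
      name: time          # the time column Treasure Data partitions on
      type: timestamp
    from_column:
      name: created_at    # existing timestamp column in the source data
out:
  mode: append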

$ td connector:issue load.yml --database td_sample_db --table td_sample_table \
--time-column created_at

The connector:issue command assumes that you have already created a database (td_sample_db) and a table (td_sample_table). If the database or the table does not exist in TD, this command fails. Create the database and table manually, or use the --auto-create-table option with the td connector:issue command to auto-create the database and table.

$ td connector:issue load.yml --database td_sample_db --table td_sample_table \
  --time-column created_at --auto-create-table

The data integration does not sort records on the server side. To use time-based partitioning effectively, sort records in files beforehand.

If you have a field called time, you don’t have to specify the --time-column option.

$ td connector:issue load.yml --database td_sample_db --table td_sample_table
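After the job completes, a quick row count can confirm the import; this sketch uses the sample database and table names from above:

$ td query -w -d td_sample_db "SELECT COUNT(1) FROM td_sample_table"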

Import Modes

You can specify file import mode in the out: section of the load.yml file. The out: section controls how data is imported into a Treasure Data table. For example, you may choose to append data or replace data in an existing table in Treasure Data.

| Mode | Description | Example |
| --- | --- | --- |
| Append | Records are appended to the target table. | in: ... out: mode: append |
| Always Replace | Replaces data in the target table. Any manual schema changes made to the target table remain intact. | in: ... out: mode: replace |
| Replace on New Data | Replaces data in the target table only when there is new data to import. | in: ... out: mode: replace_on_new_data |

Scheduling Executions

You can schedule periodic data integration execution.

Treasure Data configures the scheduler carefully to ensure high availability.

Create a Schedule Using the TD Toolbelt

A new schedule can be created using the td connector:create command.

$ td connector:create daily_import "10 0 * * *" \
 td_sample_db td_sample_table load.yml

Treasure Data also recommends that you specify the --time-column option, because Treasure Data’s storage is partitioned by time (see also data partitioning).

$ td connector:create daily_import "10 0 * * *" \
 td_sample_db td_sample_table load.yml \
 --time-column created_at

The cron parameter also accepts three special options: @hourly, @daily, and @monthly.

By default, the schedule is set up in the UTC timezone. You can set the schedule in a different timezone using the -t or --timezone option. The --timezone option supports only extended timezone formats such as 'Asia/Tokyo' and 'America/Los_Angeles'. Timezone abbreviations such as PST and CST are not supported and might lead to unexpected schedules.
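To verify a schedule after creating it, the Toolbelt's connector commands can help; daily_import is the schedule name created above:

$ td connector:list                  # list scheduled data transfers
$ td connector:show daily_import     # show the schedule's configuration
$ td connector:history daily_import  # show past executions of the schedule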