Learn more about Repro Export Integration.
You can use Repro Import Integration to ingest files from your Amazon S3 buckets with customized parameters for easy configuration.
Prerequisites
Basic knowledge of Treasure Data, including the TD Toolbelt.
A Repro application ID, Access key ID, and Secret access key.
Limitations
If you enter File name patterns and select Incremental?, the data does not load. This is because Repro creates a new folder for each upload instead of placing data in the existing folder, so incremental loading never finds new files under the previous path.
Using the TD Console to Create Your Connection
Create a New Connection
When you create a data connection, you must provide authentication to access the integration. In Treasure Data, configure the authentication and then specify the source information.
Open TD Console.
Navigate to Integrations Hub > Catalog.
Search for and select Repro.
The following dialog opens. Enter the required information:
Region. The region of your Repro application (for example, ap-northeast-1 or us-east-1).
Authentication Method. Select basic.
Access key ID. Enter the key you obtained from Repro.
Secret access key. Enter the secret access key you obtained from Repro.
Select Continue.
Enter a name for your connection.
Select Done.
Transfer Your Repro Account Data to Treasure Data
After creating the authenticated connection, you are automatically taken to the Authentications page.
Search for the connection you created.
Select New Source.
Create Your Source
Type a name for your Source in the Data Transfer field.
Select Next.
Edit the following parameters for your source:
| Parameter | Description |
| --- | --- |
| Bucket | The bucket where your Repro application data is located. |
| App ID | Your Repro application ID. |
| Upload Time | The specific upload time of the data you would like to ingest. |
| Filename pattern | A regular expression matched against file paths. If a file path does not match the specified pattern, the file is skipped. For example, if you specify the pattern .csv$, any file whose path does not end in .csv is skipped. Read more about regular expressions. |
| Filter by Modified Time | Select if you would like to use modified time as the main criterion for loading data. Specify a timestamp, for example 2019-06-03T10:30:19.806Z, so that the first execution skips files that were modified before that time. |
| Incremental by Modified Time (available if Filter by Modified Time is selected) | Select to ingest only new data since the previous ingestion. |
| Incremental? (available if Filter by Modified Time is selected) | Select to ingest only new data since the previous ingestion. |
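Conceptually, the Filename pattern and Filter by Modified Time options select files much like the sketch below. This is an illustrative Python sketch of that filtering logic, not the connector's actual implementation; the file listing and timestamps are hypothetical.

```python
import re
from datetime import datetime

def select_files(files, pattern, modified_after):
    """Keep files whose path matches `pattern` (a regular expression) and
    whose modified time is at or after the `modified_after` ISO 8601 cutoff."""
    cutoff = datetime.fromisoformat(modified_after.replace("Z", "+00:00"))
    selected = []
    for path, modified in files:
        if not re.search(pattern, path):
            continue  # Filename pattern: skip non-matching paths
        if datetime.fromisoformat(modified.replace("Z", "+00:00")) < cutoff:
            continue  # Filter by Modified Time: skip files modified before the cutoff
        selected.append(path)
    return selected

# Hypothetical file listing: (path, last-modified time) pairs
files = [
    ("exports/app-1234/events.csv", "2019-06-03T11:00:00.000Z"),
    ("exports/app-1234/events.json", "2019-06-03T11:00:00.000Z"),
    ("exports/app-1234/old_events.csv", "2019-06-01T00:00:00.000Z"),
]

print(select_files(files, r"\.csv$", "2019-06-03T10:30:19.806Z"))
# → ['exports/app-1234/events.csv']
```

Only the first file passes both filters: the .json file fails the pattern, and the older .csv file fails the modified-time cutoff.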
Configure Your Data
Select Next.
The Data Settings page opens. Optionally, edit the data settings or skip this page.
Preview Your Data
To preview your data before running the import, select Generate Preview. Data shown in the preview is approximated from your source; it is not the actual data that is imported. Verify that the data looks approximately as you expect, then select Next.
The preview is optional, and you can safely skip to the next page of the dialog if you want.
Data Placement
For data placement, select the target database and table where you want your data placed, and indicate how often the import should run.
Under Storage, create a new database and table, or select existing ones, for the imported data:
Select a Database > select an existing database or Create New Database. Optionally, type a database name.
Select a Table > select an existing table or Create New Table. Optionally, type a table name.
Choose the method for importing the data:
Append (default) - Data import results are appended to the table.
Always Replace - Replaces the entire content of an existing table with the result output of the query. If the table does not exist, a new table is created.
Replace on New Data - Replaces the entire content of an existing table with the result output only when there is new data.
Select the Timestamp-based Partition Key column.
Select the Timezone for your data storage.
Under Schedule, choose when and how often you want to run the import.
To run the import once: select Off, select the Scheduling Timezone, then select Create & Run Now.
To repeat the import: select On, then select the Schedule. The UI provides four options: @hourly, @daily, @monthly, or custom cron. You can also select Delay Transfer to delay the execution time. Select the Scheduling Timezone, then select Create & Run Now.
After your transfer has run, you can see the results of your transfer in Data Workbench > Databases.
If the table does not exist, it will be created.
If you want to set a different partition key seed than the default key, you can specify a long or timestamp column as the partitioning time. By default, the connector uses the upload_time column as the time column, via the add_time filter.
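The add_time behavior described above amounts to stamping each row with a Unix-timestamp time column derived from upload_time, which Treasure Data then uses as the partition key. The sketch below illustrates that idea in Python; it is a simplified illustration, not the actual add_time filter, and the row data is hypothetical.

```python
from datetime import datetime

def add_time(rows, source_column="upload_time", time_column="time"):
    """Sketch of an add_time-style filter: derive a Unix-timestamp `time`
    column (used as the partition key) from an existing timestamp column."""
    stamped_rows = []
    for row in rows:
        stamped = dict(row)
        dt = datetime.fromisoformat(row[source_column].replace("Z", "+00:00"))
        stamped[time_column] = int(dt.timestamp())  # seconds since the Unix epoch
        stamped_rows.append(stamped)
    return stamped_rows

# Hypothetical imported row with an upload_time column
rows = [{"event": "open_app", "upload_time": "2019-06-03T10:30:19.806Z"}]
print(add_time(rows)[0]["time"])
# → 1559557819
```

Specifying a different long or timestamp column as the partitioning time corresponds to passing a different source_column here.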