Learn more about Google Cloud Storage Export Integration.
The Data Connector for Google Cloud Storage enables you to import the contents of .tsv and .csv files stored in your GCS bucket.
For sample workflows importing data from GCS, view the Treasure Boxes.
Basic knowledge of Treasure Data
An existing Google Service Account
You also need to generate and obtain a JSON key file from the Google Developers Console. See Generating a service account credential.
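As an optional sanity check before configuring the connector, you can verify that the key file works by listing a few objects with the google-cloud-storage Python client. This is an illustrative sketch only; the key file path and bucket name are placeholders for your own values.

```python
from google.cloud import storage

KEY_FILE = "my-service-account-key.json"  # JSON key downloaded from the console (placeholder)
BUCKET = "your_bucket_name"               # placeholder

# from_service_account_json() reads the JSON key and authenticates the client
client = storage.Client.from_service_account_json(KEY_FILE)

# If the key is valid and the service account can read the bucket,
# this prints up to five object names
for blob in client.list_blobs(BUCKET, max_results=5):
    print(blob.name)
```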
When you configure a data connection, you provide authentication to access the integration. In Treasure Data, you configure the authentication and then specify the source information.
Open TD Console.
Navigate to Integrations Hub > Catalog.
Search for and select Google Cloud Storage.
The following dialog opens.
Create a New Google Cloud Storage Connector
Set the following parameters:
Select a JSON keyfile. This method uses the JSON keyfile generated from the Google Developers Console.
Copy and paste the contents of the JSON keyfile generated from the Google Developers Console into this field.
Treasure Data GCS Output is the default value. Because this is an arbitrary client name associated with API requests, you can keep the default value (Treasure Data GCS Output).
Type a name for your connection.
After creating the authenticated connection, you are automatically taken to the Authentications page.
Search for the connection you created.
Select New Source.
Type a name for your Source in the Data Transfer field.
The Source Table dialog opens. Edit the following parameters:
Google Cloud Storage bucket name (Ex. your_bucket_name)
Prefix of target keys. (Ex. logs/data_)
Regular expression to match file paths. If a file path doesn't match this pattern, the file is skipped. (Ex. .csv$)
Start after path
Inserts the last_path parameter so that the first execution skips files that sort before that path. (Ex. logs/data_20170101.csv)
Enables incremental loading. If incremental loading is enabled, the config diff for the next execution includes the last_path parameter so that the next execution skips files before that path. Otherwise, last_path is not included. The sketch after this list shows how these settings interact.
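The following Python sketch illustrates, conceptually, how the prefix, path regex, start-after path, and incremental settings combine to select files. It is not the connector's actual implementation; the function name select_files and its signature are illustrative only.

```python
import re

def select_files(all_paths, prefix, path_regex, start_after=None):
    """Return the paths the connector would import and the new last_path."""
    pattern = re.compile(path_regex)
    selected = [
        p for p in sorted(all_paths)
        if p.startswith(prefix)                       # Path Prefix
        and pattern.search(p)                         # Path Regex
        and (start_after is None or p > start_after)  # Start after path
    ]
    # With incremental loading enabled, the last imported path is recorded
    # as last_path so that the next execution skips files before it.
    last_path = selected[-1] if selected else start_after
    return selected, last_path
```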
Amazon CloudFront is a web service that speeds up the distribution of your static and dynamic web content. You can configure CloudFront to create log files that contain detailed information about every user request that CloudFront receives. If you enable logging, CloudFront saves log files such as the following:
[your_bucket] - [logging] - [E231A697YXWD39.2017-04-23-15.a103fd5a.gz]
[your_bucket] - [logging] - [E231A697YXWD39.2017-04-23-15.b2aede4a.gz]
[your_bucket] - [logging] - [E231A697YXWD39.2017-04-23-16.594fa8e6.gz]
[your_bucket] - [logging] - [E231A697YXWD39.2017-04-23-16.d12f42f9.gz]
In this case, the Source Table setting should be as shown:
Path Prefix: logging/
Path Regex: .gz$ (optional)
Start after path: logging/E231A697YXWD39.2017-04-23-15.b2aede4a.gz (assuming that you want to import the log files from 2017-04-23-16)
Incremental: true (if you want to schedule this job)
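Applying these settings to the file listing above with the select_files sketch from earlier selects only the 2017-04-23-16 files, as intended:

```python
paths = [
    "logging/E231A697YXWD39.2017-04-23-15.a103fd5a.gz",
    "logging/E231A697YXWD39.2017-04-23-15.b2aede4a.gz",
    "logging/E231A697YXWD39.2017-04-23-16.594fa8e6.gz",
    "logging/E231A697YXWD39.2017-04-23-16.d12f42f9.gz",
]
selected, last_path = select_files(
    paths,
    prefix="logging/",
    path_regex=r"\.gz$",
    start_after="logging/E231A697YXWD39.2017-04-23-15.b2aede4a.gz",
)
print(selected)   # the two 2017-04-23-16 files only
print(last_path)  # logging/E231A697YXWD39.2017-04-23-16.d12f42f9.gz
```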
The Data Settings page opens.
Optionally, edit the data settings or skip this page of the dialog.
Parses a value as the specified type and then stores it after converting it to the Treasure Data schema (see the sketch after this list).
Changes the time zone of timestamp columns if the value itself doesn't include a time zone.
Total file count limit
Maximum number of files to read. (optional)
You can name the columns and set the data type.
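The following Python sketch shows, conceptually, what the type and time zone settings do to each raw value. The column types and the timestamp format here are assumptions for illustration; the connector's actual parsing is driven by your configuration.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def parse_value(raw, col_type, tz="UTC"):
    # Parse a raw string as the declared type (illustrative types only)
    if col_type == "long":
        return int(raw)
    if col_type == "double":
        return float(raw)
    if col_type == "timestamp":
        ts = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")  # assumed format
        # The value carries no time zone, so attach the configured one
        return ts.replace(tzinfo=ZoneInfo(tz))
    return raw  # default: keep the value as a string

print(parse_value("42", "long"))                               # 42
print(parse_value("2017-04-23 15:00:00", "timestamp", "UTC"))  # tz-aware datetime
```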