# Google Cloud Storage Export Integration

You can write job results directly to your Google Cloud Storage. For the Import Integration, see [Google Cloud Storage Import Integration](/int/google-cloud-storage-import-integration).

## Prerequisites

- Basic knowledge of Treasure Data, including [TD Toolbelt](https://toolbelt.treasuredata.com/).
- A Google Cloud Platform account with the permissions described below.

## Static IP Address of Treasure Data Integration

If your security policy requires IP allowlisting, you must add Treasure Data's IP addresses to your allowlist to ensure a successful connection. You can find the complete list of static IP addresses, organized by region, at the following link: [https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/](https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/)

## Obtain the Required Google Cloud Platform Credentials

To use this feature, you need the following:

- Google Project ID
- JSON Credential
  - The Storage Object Creator role is required to create an object in the GCS bucket.
  - The Storage Object Viewer role is required to list objects in the GCS bucket.

### Obtain the Destination Bucket in Google Cloud Storage

List the Cloud Storage buckets in your project. Buckets are ordered lexicographically by name. To list the buckets in a project:

1. Open the Cloud Storage browser in the Google Cloud Console.
2. Optionally, use filtering to narrow the results. Buckets that are part of the currently selected project appear in the browser list.

### Optionally Create the Destination Bucket in Google Cloud Storage

To create a new storage bucket:

1. Open the Cloud Storage browser in the Google Cloud Console.
2. Select **Create bucket** to open the bucket creation form.

![](/assets/image2021-3-30_12-5-39.f85b2072cd53edff38ce47c1486e908014d84b076a06266368cabfd1f17f9752.d7746214.png)

3. Enter your bucket information and select **Continue** to complete each step:
    - Specify a **Name**, subject to the bucket name requirements.
    - Select a **Location type** and **Location** where the bucket data will be permanently stored.
    - Select a **Default storage class** for the bucket. Objects uploaded to the bucket are assigned this storage class by default. The **Monthly cost estimate** panel in the right pane estimates the bucket's monthly costs based on your selected storage class and location, as well as your expected data size and operations.
    - Select an **Access control** model to determine how you control access to the bucket's objects.
    - Optionally, add bucket labels, set a retention policy, and choose an encryption method.
4. Select **Create**.

### Obtain the Google JSON Credentials

The integration with Google Cloud Storage is based on server-to-server API authentication. The service account used to generate the JSON credentials must have the Storage Object Creator and Storage Object Viewer permissions for the destination bucket.

1. Visit your Google Developer Console.
2. Select **Credentials** under **APIs & auth** in the left menu.
3. Select **Service account**:

![](/assets/image-20191107-183125.bff0af4ff0b2d698ddac8f767fff865e734555f5d3d52c97ddd40f2e132142b2.d7746214.png)

4. Select the JSON-based key type, which is Google's recommended configuration. The key is automatically downloaded by the browser.

![](/assets/image-20191107-183435.4040ec4b0f373fc49ce9988f50924b047dee9b7774609f68f26894f5fc6fcc8a.d7746214.png)
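If you prefer to prepare the service account from the command line, the following is a minimal sketch using the `gcloud` CLI. The bucket name, project, and service account e-mail are placeholders, not values tied to this integration; adjust them to your environment.

```bash
# Grant the service account the roles the export integration needs:
# Storage Object Creator to write objects, Storage Object Viewer to list them.
gcloud storage buckets add-iam-policy-binding gs://samplebucket \
  --member="serviceAccount:td-export@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectCreator"

gcloud storage buckets add-iam-policy-binding gs://samplebucket \
  --member="serviceAccount:td-export@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Generate a JSON key file for the service account.
gcloud iam service-accounts keys create gcs-export-key.json \
  --iam-account=td-export@my-project.iam.gserviceaccount.com
```

The downloaded key file is the JSON credential you supply when creating the authentication in the TD Console.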
## Use the TD Console to Create Your Connection

### Create a New Authentication

In Treasure Data, you must create and configure the data connection before running your query. As part of the data connection, you provide authentication to access the integration.

1. Open **TD Console**.
2. Navigate to **Integrations Hub** > **Catalog**.
3. Search for and select Google Cloud Storage.

![](/assets/image2021-3-30_12-15-11.ab3a79a166262d667599558e54d20a5c7fb976dc8c51d698c1fc8a5415c2b8d8.d7746214.png)

4. Select **Create Authentication**.
5. Type the credentials to authenticate.

![](/assets/image2021-3-30_12-21-43.cfd52597fdb9c849b4dcd1b589fd0d2ca3a0787495ba62c69ef078c3b011a037.d7746214.png)

6. Type a name for your connection.
7. Select **Continue**.

## Define your Query

1. Complete the instructions in [Creating a Destination Integration](https://docs.treasuredata.com/display/PD/Creating+a+Destination+Integration).
2. Navigate to **Data Workbench > Queries**.
3. Select a query for which you would like to export data.
4. Run the query to validate the result set.
5. Select **Export Results**.
6. Select an existing integration authentication.

![](/assets/google-cloud-storage-export-integration-2024-08-13.1c74570eefcf79f633e3ab3bec247ede3394d9cc9a5255a108c797b55dfeed2e.d7746214.png)

7. Define any additional Export Results details. Review the integration parameters described below; your Export Results screen might differ, or you might not have additional details to fill out.

![](/assets/google-cloud-storage-export-integration-2024-08-13-1.8d763bbfd78f9cf1d1cc3e7e2916ad78d532bce9da38669ab0b28904e28ae233.d7746214.png)

8. Select **Done**.
9. Run your query.
10. Validate that your data moved to the destination you specified.

### Integration Parameters for Google Cloud Storage

| Parameter | Values | Description |
| --- | --- | --- |
| `bucket` | | Destination Google Cloud Storage bucket name (string, required). |
| `path_prefix` | | Object path prefix, including the filename (string, required). Example: `/path/to/filename.csv`. |
| `content_type` | | MIME type of the output file (string, optional). Default: `application/octet-stream`. |
| `format` | `csv`, `tsv` | Output file format (string, required). |
| `compression` | `none`, `gz`, `bzip2`, `zip_builtin`, `zlib_builtin`, `bzip2_builtin` | Compression applied to the exported file (string, optional). Default: `none`. |
| `header_line` | `true`, `false` | Write the header line with column names as the first line (boolean, optional). Default: `true`. |
| `delimiter` | `,`, `\t`, `\|`, or any single-byte character | Column delimiter character (string, optional). |
| `null_string` | | Substitution string for NULL values (string, optional). Default: empty string for CSV, `\N` for TSV. |
| `end_of_line_character` | `CRLF`, `LF`, `CR` | Line termination character (string, optional). Default: `CRLF`. |

### Example Query

```sql
SELECT c0 AS EMAIL
FROM e_1000
WHERE c0 != 'email'
```

### Validating Export Results

Upon successful completion of the query, the results are automatically uploaded to the specified Google Cloud Storage destination:

![](/assets/image-20191107-183557.20e88d749df90fa6376351d0cdbe4e628d255ab5d27d0a0d06ed8b0c7c529b71.d7746214.png)
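To confirm the upload from the command line, a quick check with the `gsutil` CLI is usually enough. This is a minimal sketch assuming the bucket and path used in the examples on this page (`samplebucket`, `/output/test.csv`); substitute your own values.

```bash
# List the exported objects under the configured path prefix.
gsutil ls gs://samplebucket/output/

# Preview the first few lines of the exported CSV.
gsutil cat gs://samplebucket/output/test.csv | head
```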
## Activate a Segment in Audience Studio

You can also send segment data to the target platform by creating an activation in the Audience Studio.

1. Navigate to **Audience Studio**.
2. Select a parent segment.
3. Open the target segment, right-click, and then select **Create Activation**.
4. In the **Details** panel, enter an activation name and configure the activation according to the Integration Parameters described earlier.
5. Customize the activation output in the **Output Mapping** panel.

![](/assets/ouput.b2c7f1d909c4f98ed10f5300df858a4b19f71a3b0834df952f5fb24018a5ea78.8ebdf569.png)

    - Attribute Columns
        - Select **Export All Columns** to export all columns without making any changes.
        - Select **+ Add Columns** to add specific columns for the export. The Output Column Name pre-populates with the same Source column name. You can update the Output Column Name. Continue to select **+ Add Columns** to add new columns for your activation output.
    - String Builder
        - Select **+ Add string** to create strings for export. Select from the following values:
            - String: Choose any value; use text to create a custom value.
            - Timestamp: The date and time of the export.
            - Segment Id: The segment ID number.
            - Segment Name: The segment name.
            - Audience Id: The parent segment number.
6. Set a **Schedule**.

![](/assets/snippet-output-connector-on-audience-studio-2024-08-28.a99525173709da1eb537f839019fa7876ffae95045154c8f2941b030022f792c.8ebdf569.png)

    - Select the values to define your schedule and optionally include email notifications.
7. Select **Create**.

If you need to create an activation for a batch journey, review [Creating a Batch Journey Activation](/products/customer-data-platform/journey-orchestration/batch/creating-a-batch-journey-activation).

## Exporting Data to Google Cloud Storage Using the CLI

The following command creates a scheduled query that sends query results to Google Cloud Storage.

- Specify your JSON keys using the following sample syntax.
- Use a backslash to break a line without breaking the command syntax.

```json
'{"type":"gcs","bucket":"samplebucket","path_prefix":"/output/test.csv","format":"csv","compression":"","header_line":false,"delimiter":",","null_string":"","newline":"CRLF",
  "json_keyfile":"{\"private_key_id\": \"ABCDEFGHIJ\", \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nABCDEFGHIJ\\ABCDEFGHIJ\\n-----END PRIVATE KEY-----\\n\", \"client_email\": \"ABCDEFGHIJ@developer.gserviceaccount.com\", \"client_id\": \"ABCDEFGHIJ.apps.googleusercontent.com\", \"type\": \"service_account\"}"}'
```

For example:

```bash
$ td sched:create scheduled_gcs "10 6 * * *" \
  -d dataconnector_db "SELECT id,account,purchase,comment,time FROM data_connectors" \
  -r '{"type":"gcs","bucket":"samplebucket","path_prefix":"/output/test.csv","format":"csv","compression":"","header_line":false,"delimiter":",","null_string":"","newline":"CRLF", "json_keyfile":"{\"private_key_id\": \"ABCDEFGHIJ\", \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nABCDEFGHIJ\\ABCDEFGHIJ\\n-----END PRIVATE KEY-----\\n\", \"client_email\": \"ABCDEFGHIJ@developer.gserviceaccount.com\", \"client_id\": \"ABCDEFGHIJ.apps.googleusercontent.com\", \"type\": \"service_account\"}"}'
```

**Options**

| Option | Values |
| --- | --- |
| `format` | `csv` or `tsv` |
| `compression` | `""` or `gz` |
| `null_string` | `""` or `\N` |
| `newline` | `CRLF`, `CR`, or `LF` |
| `json_keyfile` | Escape newlines (`\n`) with a backslash |
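Producing the escaped `json_keyfile` string by hand is error-prone. One possible shortcut, assuming the `jq` CLI is installed and `gcs-export-key.json` is a hypothetical downloaded service account key file, is to let `jq` re-encode the file as a single JSON string:

```bash
# Print the service account key as one JSON-encoded string:
# inner quotes become \" and newlines in the private key become \\n,
# matching the form shown for the json_keyfile value above.
jq 'tojson' gcs-export-key.json
```

Paste the printed string, including its surrounding double quotes, as the value of `json_keyfile` in the `-r` option.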
## Other configurations

- The Result Export can be [scheduled](/products/customer-data-platform/data-workbench/queries/scheduling-a-query) to upload data to a target destination periodically.
- All import and export integrations can be added to a [TD Workflow](/products/customer-data-platform/data-workbench/workflows).

The **td** data operator can be used to export a query result to a specified connector. For more information, see [Reference for Treasure Data Operators](/products/customer-data-platform/data-workbench/workflows/reference-for-treasure-data-operators).

## References

[The Embulk-encoder-Encryption document](/int/embulk-encoder-encryption-pgp)

## FAQ for the GCS Data Connector

Note: Ensure that you compress your file before encrypting and uploading it.

1. When you decrypt using non-built-in encryption, the file returns to a compressed format such as `.gz` or `.bz2`.
2. When you decrypt using built-in encryption, the file returns to raw data.
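As an illustration of the compress-then-encrypt ordering described above, here is a sketch of restoring such a file. It assumes `gsutil`, `gpg`, and `gunzip` are installed and uses a hypothetical object name (`test.csv.gz.gpg`); the actual name and extension depend on how the file was produced and encrypted.

```bash
# Download the encrypted object from the bucket (hypothetical object name).
gsutil cp gs://samplebucket/output/test.csv.gz.gpg .

# Decrypting a file that used non-built-in (PGP) encryption yields the
# compressed file again...
gpg --output test.csv.gz --decrypt test.csv.gz.gpg

# ...which still needs to be decompressed to obtain the raw CSV.
gunzip test.csv.gz
```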