# Elastic Cloud Export Integration

The Elastic Cloud Export Integration sends Treasure Data job results directly to your Elastic Cloud instance.

## Supported Authentication

- This export supports “basic authentication”, including the “Security” feature (formerly “Shield”) of Elastic Cloud. A hedged connectivity check is sketched below.
- LDAP, Active Directory, and the other authentication methods provided by “Security” are not supported for query result output.
- Result output connects to TCP/9200 by default, but Elastic Cloud provides a different endpoint and port for every user, so confirm yours before connecting.
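
As a quick connectivity check, you can verify that basic authentication works against your cluster before configuring the export. This is a minimal sketch: the endpoint, port, and credentials are placeholders, so substitute the values from your Elastic Cloud console.

```bash
# Placeholder endpoint, port, and credentials -- replace them with the
# values from your Elastic Cloud console (each user gets a different
# endpoint and port).
curl -XGET 'https://example.com:9200/' \
--user username:password
# A successful basic-auth request returns cluster metadata such as the
# cluster name and the Elasticsearch version.
```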


## Prerequisites

- Basic knowledge of Treasure Data, including the TD [Toolbelt](https://toolbelt.treasuredata.com/).
- Data imported into Treasure Data that you want to export to Elastic Cloud.
- Working knowledge of SQL (Hive or Presto).
- A working Elastic Cloud instance running in your environment, version 2.0 or greater recommended.
- Knowledge of the following Elastic Cloud hierarchy is also helpful:


| **Term** | **Description** | **Value to Specify** |
| --- | --- | --- |
| Cluster | A collection of one or more servers (nodes) that collectively hold your entire dataset and provide search and indexing functionality for it. | |
| Node | A single server that is part of (or all of) your cluster. | A comma-separated list of nodes |
| Index | Analogous to a database. An index is a collection of documents with somewhat similar characteristics. | The name of the index |
| Type | Analogous to a table. One or more types are defined within an index. A type is a logical category or partition of your index. | The name of the type |
| ID | A column containing a unique name for each row/record. In Elastic Cloud result export, this setting is optional. | (Optional) The name of the ID column |


For more information, go to the [Elastic Cloud documentation](https://www.elastic.co/guide/en/cloud/current/index.html).
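
To see how these terms fit together, note that each exported record becomes a document addressable by its index, type, and ID. The following sketch fetches a single document; `my_index`, `my_type`, and `doc-1` are hypothetical names used only for illustration.

```bash
# The URL path mirrors the Index / Type / ID hierarchy in the table
# above (all names here are hypothetical placeholders).
curl -XGET 'http://example.com:9200/my_index/my_type/doc-1' \
--user username:password
```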

## Define the Data Export from Treasure Data

1. Complete the instructions in [Creating a Destination Integration](https://docs.treasuredata.com/smart/project-product-documentation/creating-a-destination-integration).
2. Navigate to **Data Workbench > Queries**.
3. Select a query for which you would like to export data.
4. Run the query to validate the result set.
5. Select **Export Results To**.
6. Use the selection dialog to select your destination connection. For example:


![](/assets/image-20200722-233716.9eb19ca81e1411ea40982e0882b2ae65c4564722ff9f4a8492ea6c5c24760712.c474a7bb.png)
7. Define any additional Export Results details. For example:

![](/assets/image-20200722-233831.d32721727b031ba80b689f9be621bde8a5641dee00dae3eb48dfb1f90c693536.c474a7bb.png)
8. Select **Done**.
9. Run your query.
10. Validate that your data moved to the destination you specified.

For example, query your Elastic Cloud index and validate that it is populated with data, as described in the next section.

When you execute your query, the Treasure Data query result is imported into Elastic Cloud.
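
If you prefer the command line for step 4, a minimal sketch using the TD Toolbelt looks like the following; `my_database` and `my_table` are placeholder names.

```bash
# Run the query once from the TD Toolbelt to validate the result set
# before exporting it (database and table names are placeholders).
td query -w -T presto -d my_database \
'SELECT * FROM my_table LIMIT 10'
```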

## Validate Your Export Data within the Elastic Cloud Instance

You can sanity-check the data in your Elasticsearch index with a simple query. Assuming the host and port of your Elastic Cloud instance are `example.com:9200`, the following command dumps all your data to a file:


```bash
$ curl -XGET -i 'http://example.com:9200/*/_search' \
--user username:password > dump.txt
```

The result is a JSON file containing the column names, column types, and content of the data you previously exported. An example of the output is as follows:


```
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 2283

{"took":4,"timed_out":false,"_shards":{"total":15,"successful":15,"failed":0},"hits":{"total":100024,"max_score":1.0,"hits":[{"_index":"embulk_20160205-141457","_type":"embulk_type","_id":"AVKxyShGu46fqokIoDTf","_score":1...
```
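
For a lighter-weight check than a full dump, you can count the documents in the target index with the standard `_count` endpoint and compare the total against the number of rows your query produced; `my_index` is a placeholder name.

```bash
# Count the documents in the exported index (placeholder index name).
curl -XGET 'http://example.com:9200/my_index/_count' \
--user username:password
```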

## Tune Timeout Exceptions

Increasing `Bulk actions` and `Bulk size` packs more records into each insert request, which reduces the number of HTTP requests (see the sketch below). If that doesn’t resolve the timeouts, consider upgrading your instance specs.
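
For intuition, the connector batches records into Elasticsearch bulk requests, so fewer, larger batches mean fewer round trips. The following sketch shows roughly what a single bulk request carrying two documents looks like; the index, type, and field names are hypothetical.

```bash
# One bulk request inserting two documents as an NDJSON body (all
# names are hypothetical). Larger bulk settings pack more documents
# into each request like this one.
curl -XPOST 'http://example.com:9200/_bulk' \
--user username:password \
-H 'Content-Type: application/x-ndjson' \
--data-binary $'{"index":{"_index":"my_index","_type":"my_type"}}\n{"col1":"v1"}\n{"index":{"_index":"my_index","_type":"my_type"}}\n{"col1":"v2"}\n'
```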

## Activate a Segment in Audience Studio

You can also send segment data to the target platform by creating an activation in the Audience Studio.

1. Navigate to **Audience Studio**.
2. Select a parent segment.
3. Open the target segment, right-click, and then select **Create Activation**.
4. In the **Details** panel, enter an Activation name and configure the activation using the connection parameters described earlier in this topic.
5. Customize the activation output in the **Output Mapping** panel.


![](/assets/ouput.b2c7f1d909c4f98ed10f5300df858a4b19f71a3b0834df952f5fb24018a5ea78.8ebdf569.png)

- Attribute Columns
  - Select **Export All Columns** to export all columns without making any changes.
  - Select **+ Add Columns** to add specific columns for the export. The Output Column Name pre-populates with the same Source column name; you can update it. Continue to select **+ Add Columns** to add new columns for your activation output.
- String Builder
  - Select **+ Add string** to create strings for export. Select from the following values (a hypothetical example record follows this list):
    - String: Choose any value; use text to create a custom value.
    - Timestamp: The date and time of the export.
    - Segment Id: The segment ID number.
    - Segment Name: The segment name.
    - Audience Id: The parent segment number.
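
For illustration only, a single exported record that combines an attribute column with string-builder values might look roughly like the following; every field name and value here is hypothetical.

```
{"email":"user@example.com","export_timestamp":"2024-08-28T00:00:00Z","segment_id":"12345","segment_name":"high_value_customers","audience_id":"678"}
```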


6. Set a **Schedule**.


![](/assets/snippet-output-connector-on-audience-studio-2024-08-28.a99525173709da1eb537f839019fa7876ffae95045154c8f2941b030022f792c.8ebdf569.png)

- Select the values to define your schedule and optionally include email notifications.


7. Select **Create**.


If you need to create an activation for a batch journey, review [Creating a Batch Journey Activation](/products/customer-data-platform/journey-orchestration/batch/creating-a-batch-journey-activation).