# Adobe Analytics Import Integration V2

Adobe Analytics Import Integration v2 enables you to ingest data feeds generated by Adobe Analytics directly into Treasure Data through a native connector. It offers enhanced flexibility with the cloud storage services that Adobe Analytics users rely on. Adobe Analytics enables organizations to gather data and gain actionable insights from any digital customer interaction. For details, see the [Adobe Experience League documentation](https://experienceleague.adobe.com/docs/analytics/analyze/admin-overview/analytics-overview.html?lang=en).

## Prerequisites

- Basic knowledge of Treasure Data, including the [TD Toolbelt](https://toolbelt.treasuredata.com/).
- Basic knowledge of Adobe Analytics.

## Static IP Address of Treasure Data Integration

If your security policy requires IP whitelisting, you must add Treasure Data's IP addresses to your allowlist to ensure a successful connection. The complete list of static IP addresses, organized by region, is available at [https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/](https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/).

## Import from Adobe Analytics using TD Console

### Create Authentication

The connector supports cloud storage services as data sources. To access a data source, you need to configure authentication.

1. Select **Integrations Hub**.
2. Select **Catalog**.
   ![](/assets/26617500.ca92fa4ab9277dca95973c6bd413fc662a3f0d04b57d58f7a8c952a29f28bbec.6bd3f7a7.png)
3. Search for the integration in the Catalog by the name "*Adobe Analytics V2*", or filter by the categories "*Web/Mobile analytics services*" and/or "*Business Intelligence*".
4. Hover your mouse over the icon and select **Create Authentication**.
5. Select the **Storage Type** corresponding to the cloud service that receives data feeds from Adobe Analytics.
   ![](/assets/newauthen.7b7c1e61ee9667f04ff346388445158bac0eba1545174e72ee7fd4c115567473.4ab63049.png)
6. Enter your **Bucket** information.
7. Enter your **Endpoint**. Alternatively, select the **Region** to use that region's default endpoint.
8. Select the **Authentication Method**. The available methods depend on the selected **Storage Type**. For Amazon S3, for example, the connector supports:
   1. *Basic*
   2. *Session Token*
   3. *Assume Role*
9. Enter the credentials required by the chosen **Authentication Method**.

**Storage Type**: Currently, only Amazon S3 is available for selection. Support for other services is planned and outlined in the roadmap.

### Bucket Directory Setup

The file loader does not support files stored directly under the bucket root. Store the data feeds of different report suites under separate directories for better performance and recognition by the file loader, and use normalized, alphanumeric directory names so the file loader can recognize them reliably.

![](/assets/3194cb61-1a72-4bd8-9b2d-750ab9a37e47.d5d7c75697781d3d44bd729c9f5415f7882a143a80ba0715a868764ddb4d4315.4ab63049.png)

### Import Configuration - Hit Files

The connector supports the ingestion of both hit files and look-up files within a data feed. The file loader searches for data feed files using the Report Suite ID and the Path Prefix. If more than one data feed is found, the oldest feed is picked up; only one feed is processed per job run.

1. **Target** is the type of data to import:
   1. Data Feed: import hit files.
   2. Look Up Data: import look-up files.
2. Select **Data Feed Data** as the **Target**.
3. Enter the **Path Prefix** of the desired directory in the bucket.
4. Enter the **Report Suite ID**.
5. Select **Incremental**. When enabled, the import job searches only for files uploaded to the bucket after the previous run's timestamp.
6. Enter **Modified After**. This option filters out older data feeds.

It is recommended to combine Incremental with an appropriate job schedule so that each run picks up the next data feed as soon as Adobe Analytics delivers it.
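For reference, the same hit-file import can also be sketched as a TD Toolbelt data connector configuration file. The sketch below is illustrative only: the connector `type` name and the field keys simply mirror the console labels described above (Storage Type, Authentication Method, Bucket, Path Prefix, Report Suite ID, Incremental, Modified After) and are assumptions, not confirmed option names.

```yaml
# load.yml - hypothetical sketch of an Adobe Analytics V2 hit-file import.
# Key names below mirror the TD Console labels and are assumptions,
# not confirmed connector options; adjust them to your environment.
in:
  type: adobe_analytics_v2            # assumed connector name
  storage_type: s3                    # only Amazon S3 is currently supported
  auth_method: basic                  # Basic / Session Token / Assume Role
  access_key_id: XXXXXXXXXX
  secret_access_key: YYYYYYYYYY
  bucket: my-adobe-feeds              # bucket receiving the Adobe Analytics data feeds
  path_prefix: datafeeds/my_report_suite/   # a directory, not the bucket root
  report_suite_id: my_report_suite
  target: data_feed                   # data_feed (hit files) or look_up_data
  incremental: true                   # pick up only files newer than the previous run
  modified_after: "2024-01-01T00:00:00Z"    # skip feeds delivered before this time
out:
  mode: append
```

Such a file would typically be submitted with `td connector:issue load.yml --database <database> --table <table>` once the keys are adjusted to the connector's actual options.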
### Import Configuration - Look-up Files

Look-up files can be imported into the database to facilitate subsequent queries and analysis.

1. In Source Table, set **Target** to **Look Up Data**.
   ![](/assets/lookup.84f7be4eece0b3e812009d4fe7548a60315c947cca1f3eb5acb92f25cd45cc25.4ab63049.png)

The imported look-up data has the following layout:

```
lookup_type (filename without extension); key (1st column) ; value (2nd column) ; source (datafeed name)
browser    ; 1 ; chrome      ; treasuredata_20240101-120000
browser    ; 2 ; IE          ; treasuredata_20240101-120000
country    ; 1 ; Afghanistan ; treasuredata_20240101-120000
country    ; 2 ; Albania     ; treasuredata_2024-01-01
country    ; 3 ; Algeria     ; treasuredata_2024-01-01
resolution ; 1 ; 320 x 200   ; ...
resolution ; 2 ; 640 x 240   ; ...
.........
```

### Advanced Configuration

Follow the steps below to configure advanced settings, data placement, and the job schedule.

#### Configure Data Settings

The connector supports retries when the initial request fails. A retry takes place only when a hit file is found but its look-up files are not. The wait time before the next retry follows an exponential backoff rule. You can configure:

1. **Max Retry**: the maximum number of retries. The default is 7.
2. **Initial Retry Wait**: the initial wait time in seconds. The default is 2.
3. **Max Retry Wait**: the maximum allowed wait time before the next retry.

![](/assets/retry.dbefabaa9af2c57698deeb05e0e9b6a8ec982ea0b2d9ebd52d48be85418c4d2a.4ab63049.png)

#### Configure Data Preview

Because hit data is raw clickstream data, the connector shows only dummy values on this screen.

![](/assets/data_preview.6e7e25f2329228a11ab296c7f88cd667eb24d8381095d563b0c8022b4a9810af.4ab63049.png)

#### Configure Data Placement

You can configure which database and table the data is imported into.

![](/assets/data-placement.729c6b7749c366871970dcaf6995ca3f79230d64e22473dfbdfe86db8e83b856.4ab63049.png)

1. **Database and Table**: select a destination or create a new one.
2. Enter the **Database** information.
3. Enter the **Table** information.
4. Select the storing **Method**:
   1. *Append*
   2. *Always replace*
   3. *Replace on new data*
5. Select the **Timestamp-based Partition Key**.

#### Configure Schedule

1. Configure **Repeat**:
   1. *On*: configures the schedule.
   2. *Off*
2. Configure **Schedule Timezone**. Select the timezone used for the scheduling timestamp.
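If you manage imports from the command line rather than the TD Console, the Repeat setting above corresponds to registering the job as a scheduled connector session with the TD Toolbelt. A minimal sketch, assuming the hypothetical `load.yml` from the earlier example and placeholder names:

```sh
# Register a scheduled run of the (hypothetical) load.yml configuration.
# Arguments: session name, cron expression, destination database,
# destination table, and the connector configuration file.
td connector:create \
  daily_adobe_import \
  "10 0 * * *" \
  my_database \
  adobe_hits \
  load.yml
```

Pairing the cron schedule with the Incremental option lets each scheduled run pick up only the data feed delivered since the previous run, as described in the hit-file configuration above.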