# Amazon Marketing Cloud Export Integration

## Introduction Video

An introduction video (Wistia media ID `c2k9ygcvyh`) is embedded on this page.

## Overview

Integrating Amazon Marketing Cloud (AMC) and Treasure Data CDP allows advertisers to use Amazon Ads to pass strategic CDP segments from Treasure Data in a pseudonymized form to AMC. Advertisers can then combine the CDP inputs with Amazon Ads signals and obtain unique insights on topics such as media impact, audience segmentation, segment overlap, and customer journeys in a privacy-safe manner. Learn more about how integrating Treasure Data CDP with AMC [can drive better Ad campaigns](https://aws.amazon.com/blogs/industries/integrating-cdp-with-amazon-marketing-cloud-to-drive-better-ad-campaigns/).

This Amazon Marketing Cloud Export Integration lets you write job results from Treasure Data and upload pseudonymized audience datasets directly to Amazon Marketing Cloud. Personally identifiable information (PII) fields are programmatically normalized and hashed using SHA-256; a short SQL sketch of this kind of normalization is shown below. If PII fields are already hashed, they are transferred as is. Hashed identifier types: EMAIL, FIRST_NAME, LAST_NAME, PHONE, ADDRESS, and CITY.

## What can you do with this Integration?

- Upload pseudonymized audience datasets to Amazon Marketing Cloud.
- Delete identities in all existing datasets.
- Create a rule-based audience and activate it directly in Amazon DSP.

Amazon Marketing Cloud only accepts hashed or pseudonymized information. All information in an advertiser's AMC instance is handled strictly according to Amazon's privacy policies, and an advertiser's signals cannot be exported or accessed by Amazon. Advertisers can only access aggregated and anonymous outputs from AMC.

## Prerequisites

- Basic knowledge of Treasure Data, including [TD Toolbelt](https://toolbelt.treasuredata.com/).
- Amazon Marketing Cloud account.
- S3 bucket that grants access to an Amazon Marketing Cloud account.
- Amazon DSP account that was invited into the AMC instance.
- (Optional) Create the dataset definition before using the Amazon Marketing Cloud Export Integration. Visit [Treasure Boxes](https://github.com/treasure-data/treasure-boxes/tree/e5d13703022cb6a3f608f9bd0d9ccba07f93229f/scenarios/create_amc_dataset) to see an example of a dataset creation workflow. This integration supports creating a dataset the first time the activation runs.

## Requirements and Limitations

- Query columns must be specified with the exact column names (case insensitive) and data types.
- It is recommended not to use the existing S3 bucket tied to the AMC instance. Create a new S3 bucket for data upload.
- The Amazon DSP minimum audience size is 2,000 identities.

## Static IP Address of Treasure Data Integration

If your security policy requires IP whitelisting, you must add Treasure Data's IP addresses to your allowlist to ensure a successful connection. Please find the complete list of static IP addresses, organized by region, at the following link: [https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/](https://api-docs.treasuredata.com/en/overview/ip-addresses-integrations-result-workers/)

## Get Amazon Marketing Cloud info

Log in to [https://advertising.amazon.com/marketing-cloud](https://advertising.amazon.com/marketing-cloud).
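**Note on PII hashing:** As described in the Overview, the integration normalizes and SHA-256 hashes PII columns for you, and already-hashed values pass through unchanged. If you prefer to pre-hash identifiers in your own query, the following is a minimal sketch in Presto/Trino SQL. The table name `customers` and the lowercase/trim normalization shown here are illustrative assumptions, not a specification of the integration's exact normalization rules.

```sql
-- Illustrative pre-hashing sketch (assumed table and normalization):
-- lowercase and trim the email, SHA-256 hash it, and render the digest as lowercase hex.
SELECT
  lower(to_hex(sha256(to_utf8(lower(trim(email)))))) AS email
FROM customers
```

Values pre-hashed this way are transferred as is, as noted above.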
## Obtain Amazon Marketing Cloud Instance ID and Account ID

After logging into an Amazon Marketing Cloud instance, perform the following steps to obtain the Amazon Marketing Cloud **Instance ID**, **Account ID**, and **Data upload AWS account ID** information.

1. Obtain the Amazon Marketing Cloud **Instance ID** from the Instance list. ![](/assets/amc-instance-id.abc2ce38604fb5981160a05d8e500b97dc615cb846db07a6a80964b8c22c0c9f.56be814e.png)
2. View the Amazon Marketing Cloud **Account ID** assigned to the parameter **entityId**. ![](/assets/amc-account-id.4f957be8a7083286d9c2efaaae3f84769bc09229d808004a7d75f691d1040462.56be814e.png)
3. Get the **Data upload AWS account ID** from the Instance Info page. ![](/assets/image2023-5-25_15-59-20.880a33b3c6ecb4eb8922f04975f7514151b85d487df4c53d6feb482cc0a8756a.56be814e.png)

## Prepare Dataset as Upload Target

During the first activation, you can create datasets on the fly. Subsequent runs reuse the dataset without manual intervention. This requires the dataset definition to be configured. The following are examples of dataset definitions. If your target dataset already exists, you can skip this section.

**Example of a simple Dimension dataset without any hashed PII columns**

This dataset has two columns: product_asin and product_sku. Note that if the schema from your query doesn't match this, the upload will result in an error.

```json
{
  "dataSet": {
    "columns": [
      {"name": "product_asin", "columnType": "DIMENSION", "dataType": "STRING"},
      {"name": "product_sku", "columnType": "METRIC", "dataType": "STRING"}
    ],
    "dataSetId": "mydemosimpledimensionds",
    "description": "my demo dimension dataset"
  }
}
```

**Example of a fact dataset with email as hashed PII and TCF as consent type**

In this example, there are three columns: email (a hashed PII column), record_date (isMainEventTime = true indicates this is a fact dataset), and tcf_string (the TCF consent string).

```json
{
  "dataSet": {
    "dataSetId": "mydemofactdswithidentity",
    "columns": [
      {"columnType": "DIMENSION", "dataType": "STRING", "externalUserIdType": {"hashedPii": "EMAIL"}, "name": "email"},
      {"columnType": "DIMENSION", "dataType": "DATE", "isMainEventTime": true, "name": "record_date"},
      {"columnType": "DIMENSION", "dataType": "STRING", "name": "tcf_string", "consentType": "TCF"}
    ],
    "countryCode": "US"
  }
}
```

Check out the External Reference section for more detailed Amazon guidelines. Contact our technical support team if you need any assistance.

## Obtain Dataset ID and Dataset Field Names

If you create the dataset using this integration, the specified Dataset ID is reused in subsequent activation runs; there is no requirement to obtain it from the Amazon Query editor page.

1. From the Amazon Query editor page, you can obtain the **Dataset ID** and **Dataset Field Names**. ![](/assets/image2023-5-13_10-34-51.caee09a878f3fe4aabb0b5fa3ada834586a21ad3f3761f878a928e2b234a716a.56be814e.png)
2. Select the **info** icon to obtain each field name's data type. ![](/assets/image2023-5-13_11-39-11.342805db7ae0f5c91090897bdf58fdf1f4e4d4e481a58442f5677e10f3122189.56be814e.png)

## Configure the S3 Bucket to Allow Access from the Data upload AWS account ID

1. Log in to S3, navigate to your bucket > Permissions tab, and select Edit beneath the **Bucket policy**. ![](/assets/image2023-5-13_10-36-24.5387d4dea8fa1103eed4589b47476ed14b7dadfb8d6396b60761d08617336dfa.56be814e.png)
2. Copy and paste the following configuration after replacing your **Data upload AWS account ID** and **Bucket name**, and select **Save**.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::{{Data upload AWS account ID}}:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObjectTagging",
        "s3:GetBucketTagging"
      ],
      "Resource": [
        "arn:aws:s3:::{{bucket name}}/*",
        "arn:aws:s3:::{{bucket name}}"
      ]
    }
  ]
}
```

3. Define tags for your bucket. To define a tag for your instance, perform the following steps:
   1. Navigate to your Amazon S3 console and click the name of the bucket you want to associate tags with.
   2. Click **Properties** and scroll to the **Tags** section. Click **Edit**.
   3. Click **Add tag** to define a key for the tag and provide a value for the key. For our purpose, define the key as "instance" and type the identifier of the instance you use to upload data as the value for the key.
   4. Click **Save**.

Visit https://advertising.amazon.com/API/docs/en-us/guides/amazon-marketing-cloud/advertiser-data-upload/advertiser-data-s3-bucket for more details.

## Use the TD Console to Create a Connection

Before running your query, you must create and configure the data connection in Treasure Data. As part of the data connection, you provide authentication to access the integration.

### Create a New Authentication

1. Open **TD Console**.
2. Navigate to **Integrations Hub** > **Catalog**.
3. Search for Amazon Marketing Cloud and select Amazon Marketing Cloud. ![](/assets/image2023-5-13_10-37-51.4ad46ad5c1f770f3fa643704b97b33f5b48c4d3fca39fdbc0340123510f58f4b.56be814e.png)
4. Select the **Click here** link in New Authentication to connect to a new Amazon Account. ![](/assets/screen-shot-2023-04-24-at-17.29.15.277937262c2647f54bf8d538db9104329274257337aac7c7a1625c23045d81f8.56be814e.png)
5. You are redirected to the Amazon Marketing Cloud instance, where you can log in using OAuth. Provide the username and password. ![](/assets/screen-shot-2023-04-24-at-17.33.04.59530de12fc559cc1e7aa5527f50209c153c9d47a35e9cc1a7021ed01b78437e.56be814e.png)
6. Select **Allow** to accept the consent screen, which redirects you to the TD Console. ![](/assets/screen-shot-2023-04-24-at-17.35.48.7ff015ad82656050a4ccd0b4269a763308726ae72b89bf22f3d5d90ef19cf431.56be814e.png)
7. Fill in the required credentials fields. ![](/assets/screenshot-2023-05-25-at-20.17.00.33f32ce1d7ed5fe7440059c27ac544a060c9c00770934f907553fc5a37de1da5.56be814e.png)
8. Select **Continue**.
9. Type a name for your authentication.
10. Select **Done.**

The following table describes the parameters for configuring the Amazon Marketing Cloud Export Integration.

| **Parameter** | **Description** |
| --- | --- |
| **AMC Instance Id** | Amazon Marketing Cloud Instance ID |
| **AMC Account Id** | Amazon Marketing Cloud Account ID. If you leave it blank, the first account of the instance ID is used. |
| **S3 Endpoint** | S3 service endpoint override. You can find region and endpoint information in the [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) document (for example, [*s3.ap-northeast-1.amazonaws.com*](https://s3.ap-northeast-1.amazonaws.com/)). When specified, it overrides the region setting. |
| **S3 Region** | AWS Region |
| **S3 Authentication Method** | Choose from the following authentication methods: **basic** uses *access_key_id* and *secret_access_key* to authenticate (see [AWS Programmatic access](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html)) and requires Access Key ID and Secret access key. **session (Recommended)** uses a temporarily generated *access_key_id*, *secret_access_key*, and *session_token* and requires Access Key ID, Secret access key, and Session token. **assume_role** uses role access (see [AWS AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)) and requires TD's Instance Profile, Account ID, Your Role Name, External ID, and Duration In Seconds. **anonymous** is not supported. |
| **Access Key ID** | AWS S3 issued Access Key ID |
| **Secret Access Key** | AWS S3 issued Secret Access Key |
| **S3 Session token** | Your temporary AWS Session Token |
| **TD's Instance Profile** | The TD Console provides this value. The numeric portion of the value constitutes the Account ID you use to create your IAM role. |
| **Account ID** | Your AWS Account ID |
| **Your Role Name** | Your AWS Role Name |
| **External ID** | Your Secret External ID |
| **Duration In Seconds** | Duration for the temporary credentials |

### Define your Query

1. Navigate to **Data Workbench** > **Queries**.
2. Select **New Query**.
3. Run the query to validate the result set. ![](/assets/image2021-9-7_15-1-38.a0ef34a3cfb6035eb6a53a758a755062decbb29fac046d1b24440a76d0662c7a.56be814e.png)

### Specify the Result Export Target

1. Select **Export Results**. ![](/assets/image2021-9-7_15-10-56.ee7ed43caab64adefafcc22595462fd8068c974c4f47b5959a7babd7d99972b8.56be814e.png)
2. Select an existing integration authentication. ![](/assets/image2020-12-18_13-44-6.09e8af43184e33e337bef7c546600eaaa5be9f010b690af1d591c7c2b4bb2df3.56be814e.png)
3. Define any additional Export Results details. In your export integration content, review the integration parameters.

### Upload or Delete Operation

1. Select **Upload** or **Delete** for Dataset Operation. Check "**Create new dataset if it does not exist**" to create a new dataset on the fly, and provide the dataset definition described in the section "Prepare Dataset as Upload Target."

### Create Rule-based Audiences

![](/assets/image2023-10-2_15-58-16.9c8652d39d72fd74e5c6f373eb7be55c22902e167be9e11ef76cb2889543bc94.56be814e.png)

The following table describes the configuration parameters for the Amazon Marketing Cloud export integration.

| Parameter | Required | Description |
| --- | --- | --- |
| Target | yes | Supports two target types: AMC dataset and Rule-based Audiences |
| **For "AMC Dataset" Type** | | |
| API version | yes, the default is Latest | AMC version. Values include Latest only |
| Dataset Operation | yes, if the target is the AMC dataset | Dataset operation. **Upload**: upload data into the dataset. **Delete**: delete identities from all existing datasets |
| Dataset Identifier | yes, if the target is the AMC dataset and the dataset operation is upload | The dataset ID that data will be uploaded to |
| Dataset Definition | required when the option "Create new dataset if it does not exist" is selected | Define the target dataset in JSON format. |
| Update Strategy | yes, if the API version is Latest | Values include ADDITIVE, FULL REPLACE, OVERLAP REPLACE, and OVERLAP KEEP. Visit https://advertising.amazon.com/API/docs/en-us/guides/amazon-marketing-cloud/advertiser-data-upload/advertiser-data-upload for more details. |
| Country Code | | The source country of your uploaded data, in ISO 3166-1 alpha-2 format. Visit https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Officially_assigned_code_elements for more details. |
| S3 Bucket | yes, if the target is the AMC dataset and the dataset operation is upload | S3 bucket name |
| S3 Path | yes, if the target is the AMC dataset and the dataset operation is upload | The S3 path where the uploaded data is stored |
| File Name Prefix | | File upload name prefix |
| Wait Until The Operation Finish? | yes, if the target is the AMC dataset and the dataset operation is upload | Wait until the operation is finished on the AMC side |
| Clean Uploaded Files After Done? | | Remove all uploaded files from S3 after the operation is done |
| **For "Rule-based Audiences" Type** | | |
| Audience Name | yes, if the target is Rule-based Audiences | |
| Audience Description | | |
| Advertiser Id | yes, if the target is Rule-based Audiences | Advertiser ID on Amazon DSP |
| SQL Statement | yes, if the target is Rule-based Audiences | Query run on the AMC instance. **user_id (case sensitive) must always be part of the SELECT statement – the audience is constructed from user_ids.** **For example: select user_id from tbl;** |
| Start Time | yes, if the target is Rule-based Audiences | Starting date of data to query |
| End Time | yes, if the target is Rule-based Audiences | Ending date of data to query |
| Refresh Rate Days | | The refresh rate days value determines how frequently the SQL query is re-run and how completely the existing Amazon DSP audience is overwritten with new data. Only values between 0 and 21 are valid. If refresh rate days is set to 0, the audience is deactivated after 30 days. Default value: 21 |
| Time Window Relative | | The time window relative parameter allows you to increment the date range of the SQL query by the refresh rate days value. Default value: false |

### Requirements for Uploading Dataset Query

- To upload data to the dataset, your query result must have column names matching the dataset field names (case-sensitive). Query columns that do not match any dataset field are ignored.
- The main event time column is required when you upload data to a **FACT dataset**.
- All **non-nullable fields** of the dataset are required in the query.
- If a dataset field is non-nullable and a row of the query result has a null value for this field, the row is skipped.

For example, we have the FACT dataset *tutorial_off_amazon_purchases* with *purchase_time* as the main event time field. All fields are non-nullable. ![](/assets/image2023-5-13_13-6-44.30fd735def984a2517f97c97ab7ab0c84690721227ad9b22fc8abb1d187da405.56be814e.png)

Sample query:

```SQL
SELECT product_name, product_sku, product_quantity, purchase_time, purchase_value FROM table_name;
```

Alternatively, you can use aliases to match your query column names with the dataset fields. Sample query:

```sql
SELECT column_a AS product_name, column_b AS product_sku, column_c AS product_quantity, column_d AS purchase_time, column_e AS purchase_value FROM table_name
```

The data type of each column from the query result must be compatible with the dataset field.

| Column Data Type | Dataset Field Data Type |
| --- | --- |
| STRING | STRING |
| DOUBLE | DECIMAL |
| LONG | INTEGER |
| LONG | LONG |
| TIMESTAMP | TIMESTAMP (yyyy-MM-ddThh:mm:ssZ) |
| TIMESTAMP | DATE (yyyy-MM-dd) |
| LONG (epoch second) | TIMESTAMP (yyyy-MM-ddThh:mm:ssZ) |
| LONG (epoch second) | DATE (yyyy-MM-dd) |

### Requirements for Deleting Identities from Existing Dataset Query

To delete identities from all existing datasets, your query result must have at least one identity column name.
The accepted identity column names are *first_name*, *last_name*, *email*, *phone*, *address*, *city*, *state*, *zip*, and *country_code*. Columns with other names are ignored.

Sample query:

```SQL
SELECT first_name, last_name, email FROM table_name
```

### Requirements for Creating Rule-based Audiences

Creating an audience doesn't require data from TD. It runs a query against the AMC instance and creates an audience based on that query result. Therefore, a trivial query such as "**SELECT 1**" is used only to trigger the job from the TD side; the TD-side query must not return more than one row.

## Activate a Segment in Audience Studio

You can also send segment data to the target platform by creating an activation in Audience Studio.

1. Navigate to **Audience Studio**.
2. Select a parent segment.
3. Open the target segment, right-click, and then select **Create Activation.**
4. In the **Details** panel, enter an Activation name and configure the activation according to the previous section on Configuration Parameters.
5. Customize the activation output in the **Output Mapping** panel. ![](/assets/ouput.b2c7f1d909c4f98ed10f5300df858a4b19f71a3b0834df952f5fb24018a5ea78.8ebdf569.png)
   - Attribute Columns
     - Select **Export All Columns** to export all columns without making any changes.
     - Select **+ Add Columns** to add specific columns for the export. The Output Column Name pre-populates with the same Source column name. You can update the Output Column Name. Continue to select **+ Add Columns** to add new columns for your activation output.
   - String Builder
     - **+ Add string** to create strings for export. Select from the following values:
       - String: Choose any value; use text to create a custom value.
       - Timestamp: The date and time of the export.
       - Segment Id: The segment ID number.
       - Segment Name: The segment name.
       - Audience Id: The parent segment number.
6. Set a **Schedule**. ![](/assets/snippet-output-connector-on-audience-studio-2024-08-28.a99525173709da1eb537f839019fa7876ffae95045154c8f2941b030022f792c.8ebdf569.png)
   - Select the values to define your schedule and optionally include email notifications.
7. Select **Create**.

If you need to create an activation for a batch journey, review [Creating a Batch Journey Activation](/products/customer-data-platform/journey-orchestration/batch/creating-a-batch-journey-activation).

### (Optional) Schedule Query Export Jobs

You can use Scheduled Jobs with Result Export to periodically write the output result to a target destination that you specify. Treasure Data's scheduler feature supports periodic query execution to achieve high availability.

When two schedule specifications conflict, the specification requesting to execute more often is followed and the other is ignored. For example, if the cron schedule is `'0 0 1 * 1'`, the 'day of month' and 'day of week' specifications are discordant: the former requires the query to run on the first day of each month at midnight (00:00), while the latter requires it to run every Monday at midnight (00:00). The latter specification is followed.

#### Scheduling your Job Using TD Console

1. Navigate to **Data Workbench > Queries**.
2. Create a new query or select an existing query.
3. Next to **Schedule**, select None. ![](/assets/image2021-1-15_17-28-51.f1b242f6ecc7666a0097fdf37edd1682786ec11ef80eff68c66f091bc405c371.0f87d8d4.png)
4. In the drop-down, select one of the following schedule options: ![](/assets/image2021-1-15_17-29-47.45289a1c99256f125f4d887e501e204ed61f02223fde0927af5f425a89ace0c0.0f87d8d4.png)

| Drop-down Value | Description |
| --- | --- |
| Custom cron... | Review [Custom cron... details](#custom-cron-details). |
| @daily (midnight) | Run once a day at midnight (00:00) in the specified time zone. |
| @hourly (:00) | Run every hour at 00 minutes. |
| None | No schedule. |

#### Custom cron... Details

![](/assets/image2021-1-15_17-30-23.0f94a8aa5f75ea03e3fec0c25b0640cd59ee48d1804a83701e5f2372deae466c.0f87d8d4.png)

| **Cron Value** | **Description** |
| --- | --- |
| `0 * * * *` | Run once an hour. |
| `0 0 * * *` | Run once a day at midnight. |
| `0 0 1 * *` | Run once a month at midnight on the morning of the first day of the month. |
| "" | Create a job that has no scheduled run time. |

```
 *    *    *    *    *
 -    -    -    -    -
 |    |    |    |    |
 |    |    |    |    +----- day of week (0 - 6) (Sunday=0)
 |    |    |    +---------- month (1 - 12)
 |    |    +--------------- day of month (1 - 31)
 |    +-------------------- hour (0 - 23)
 +------------------------- min (0 - 59)
```

The following named entries can be used:

- Day of Week: sun, mon, tue, wed, thu, fri, sat.
- Month: jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec.

A single space is required between each field. The values for each field can be composed of:

| Field Value | Example | Example Description |
| --- | --- | --- |
| A single value, within the limits displayed above for each field. | | |
| A wildcard `'*'` to indicate no restriction based on the field. | `'0 0 1 * *'` | Configures the schedule to run at midnight (00:00) on the first day of each month. |
| A range `'2-5'`, indicating the range of accepted values for the field. | `'0 0 1-10 * *'` | Configures the schedule to run at midnight (00:00) on the first 10 days of each month. |
| A list of comma-separated values `'2,3,4,5'`, indicating the list of accepted values for the field. | `'0 0 1,11,21 * *'` | Configures the schedule to run at midnight (00:00) on the 1st, 11th, and 21st day of each month. |
| A periodicity indicator `'*/5'` to express how often, within the field's valid range of values, a schedule is allowed to run. | `'30 */2 1 * *'` | Configures the schedule to run on the 1st of every month, every 2 hours starting at 00:30. `'0 0 */5 * *'` configures the schedule to run at midnight (00:00) every 5 days starting on the 5th of each month. |
| A comma-separated list of any of the above except the `'*'` wildcard, for example `'2,*/5,8-10'`, is also supported. | `'0 0 5,*/10,25 * *'` | Configures the schedule to run at midnight (00:00) every 5th, 10th, 20th, and 25th day of each month. |

5. (Optional) You can delay the start time of a query by enabling Delay execution.

### Execute the Query

Save the query with a name and run it, or just run the query. Upon successful completion of the query, the query result is automatically exported to the specified destination.

Scheduled jobs that continuously fail due to configuration errors may be disabled on the system side after several notifications.

## (Optional) Configure Export Results in Workflow

Within Treasure Workflow, you can specify the use of this integration to export data. Learn more about [Exporting Data with Parameters](https://docs.treasuredata.com/smart/project-product-documentation/exporting-data-with-parameters).
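The example workflows below reference query files such as `upload.sql`, `delete.sql`, and `audience.sql` but do not show their contents. As a hypothetical illustration only, an `upload.sql` for the *tutorial_off_amazon_purchases* example could simply select one column per dataset field, using aliases where needed so the result column names match the dataset field names (the table name `purchases` is an assumption):

```sql
-- upload.sql (illustrative sketch): result column names must match the
-- target dataset's field names; the workflow exports this result through
-- the result_settings shown in the examples below.
SELECT
  product_name,
  product_sku,
  product_quantity,
  purchase_time,
  purchase_value
FROM purchases
```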
### Example Workflow for uploading a dataset

```yaml
_export:
  td:
    database: amc_db

+amc_task:
  td>: upload.sql
  database: ${td.database}
  result_connection: new_amc_auth
  result_settings:
    type: amazon_marketing_cloud
    target: dataset
    operation: upload
    dataset_id: dataset_id
    bucket: bucket_name
    path_prefix: path_prefix/
    file_name_prefix: file_name_prefix
    wait_until_finish: true
    clean_upload_file: true
```

### Example Workflow for creating a dataset and uploading

```yaml
_export:
  td:
    database: amc_db

+amc_task:
  td>: upload.sql
  database: ${td.database}
  result_connection: new_amc_auth
  result_settings:
    type: amazon_marketing_cloud
    target: dataset
    operation: upload
    dataset_id: dataset_id
    create_dataset: true
    dataset_definition: dataset_definition_in_json_string
    bucket: bucket_name
    path_prefix: path_prefix
    file_name_prefix: file_name_prefix
    wait_until_finish: true
    clean_upload_file: true
```

### Example Workflow for deleting identities from existing datasets

```yaml
_export:
  td:
    database: amc_db

+amc_task:
  td>: delete.sql
  database: ${td.database}
  result_connection: new_amc_auth
  result_settings:
    type: amazon_marketing_cloud
    target: dataset
    operation: delete
```

### Example Workflow for creating rule-based audiences

```yaml
_export:
  td:
    database: amc_db

+amc_task:
  td>: audience.sql
  database: ${td.database}
  result_connection: new_amc_auth
  result_settings:
    type: amazon_marketing_cloud
    target: rule_based_audiences
    amc_instance_id: amc_instance_id
    amc_account_id: amc_account_id
    audience_name: "test_audience"
    audience_description: "test audience description"
    advertiser_id: 123
    query: "SELECT user_id FROM dsp_impressions"
    time_window_start: "2023-06-25T00:00:00"
    time_window_end: "2023-07-25T00:00:00"
    refresh_rate_days: 1
    time_window_relative: false
```

## External Reference

- Fact vs. Dimension datasets: [https://docs.aws.amazon.com/solutions/latest/amazon-marketing-cloud-uploader-from-aws/amc-fact-compared-with-dimension-datasets.html](https://docs.aws.amazon.com/solutions/latest/amazon-marketing-cloud-uploader-from-aws/amc-fact-compared-with-dimension-datasets.html)
- Dataset creation guideline: [https://advertising.amazon.com/API/docs/en-us/guides/amazon-marketing-cloud/advertiser-data-upload/advertiser-data-sets](https://advertising.amazon.com/API/docs/en-us/guides/amazon-marketing-cloud/advertiser-data-upload/advertiser-data-sets)