This Data Connector allows you to import Apptopia Data Source objects into Treasure Data.


Use TD Console

Create a new connection

In Treasure Data, you must create and configure the data connection prior to running your query. As part of the data connection, you provide authentication to access the integration.

  1. Open TD Console.

  2. Navigate to Integrations Hub > Catalog.

  3. Click the search icon on the far-right of the Catalog screen, and enter Apptopia.
  4. Hover over the Apptopia connector and select Create Authentication.

          The authentication dialog opens.

Enter your Apptopia Client ID and Secret Key, then click Continue. Give your connection a name, and click Done.

Create a new transfer

After you create the authentication, you are taken to the Authentications tab. Find the connection you created and select New Transfer.

In the dialog that opens, edit the transfer details and select Next.

Preview your data. To change any settings, select Advanced Settings; otherwise, select Next.

In Advanced Settings, you can change options such as rate limits.

Select the database and table where you want to transfer the data.

Specify the schedule of the data transfer and select Start Transfer.

You will see the new data transfer in progress, listed under the My Input Transfers tab, and a corresponding job will be listed in the Jobs section.

Command Line

Install ‘td’ command v0.11.9 or later

Install the newest version of TD Toolbelt, then verify the installed version:

$ td --version

Create Configuration File

Prepare a configuration file (for example, load.yml) with your Apptopia account access information, as follows:

  in:
    type: apptopia
    client: xxxxxxxx
    secret: xxxxxxxx
    target: publisher_performance      # required, see Appendix B
    store: itunes_connect              # required, see Appendix C
    start_date: 2017-01-01             # required for all targets except `category` and `country`
    end_date: 2017-02-01               # required for all targets except `category` and `country`
    requests_per_minute_limit: 300     # optional, 1000 by default, see Appendix F
  out:
    mode: replace

This example imports the Apptopia Publisher Performance Data Source.

For more details on available out modes, see the Modes for Out Plugin section in the Appendix.
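Because the `category` and `country` targets do not require a date range, a configuration for them can omit `start_date` and `end_date`. A minimal sketch (the store value here is only an illustration):

```yaml
in:
  type: apptopia
  client: xxxxxxxx
  secret: xxxxxxxx
  target: category          # no start_date/end_date needed for this target
  store: itunes_connect
out:
  mode: append
```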

Preview Data to Import (optional)

You can preview data to be imported using the command td connector:preview.

$ td connector:preview load.yml
| id:string  | store:string     | country_iso:string | ...
| 420233213  | "itunes_connect" | US                 |
|  | "google_play"    | JP                 |

Execute Load Job

Submit the load job. It may take a couple of hours depending on the data size. You must specify the database and table where the data will be stored.

It is recommended to specify the --time-column option, because Treasure Data’s storage is partitioned by time (see also data partitioning). If the option is not given, the Data Connector chooses the first long or timestamp column as the partitioning time. The column specified by --time-column must be of either long or timestamp type.

If your data doesn’t have a time column, you can add one by using the add_time filter option. For more details, see add_time Filter Plugin for Integrations.
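As a sketch, an add_time filter that stamps each record with the upload time might look like this in load.yml (the exact options are described in the add_time plugin documentation):

```yaml
filters:
  - type: add_time
    to_column:
      name: time            # column to create
    from_value:
      mode: upload_time     # use the upload time as the column value
```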

$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column updated_date

The preceding command assumes that you already created the database (td_sample_db) and the table (td_sample_table). If the database or table does not exist in TD, the command fails. Create the database and table in advance (see Database and Table Management), or use the --auto-create-table option with the td connector:issue command to create them automatically:

$ td connector:issue load.yml --database td_sample_db --table td_sample_table --time-column updated_date --auto-create-table

You can assign a Time Format column as the partitioning key using the --time-column option.

Scheduled Execution

You can schedule periodic Data Connector executions for recurring Apptopia imports. We configure our scheduler carefully to ensure high availability. By using this feature, you no longer need a cron daemon in your local data center.

Create the Schedule

A new schedule can be created using the td connector:create command. The name of the schedule, a cron-style schedule, the database and table where the data will be stored, and the Data Connector configuration file are required.

$ td connector:create \
    daily_apptopia_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml

The `cron` parameter also accepts the options: `@hourly`, `@daily` and `@monthly`.

By default, the schedule is set up in the UTC timezone. You can set the schedule in another timezone using the -t or --timezone option. The `--timezone` option supports only extended timezone formats such as 'Asia/Tokyo' and 'America/Los_Angeles'. Timezone abbreviations such as PST and CST are *not* supported and may lead to unexpected schedules.
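For example, to run the schedule from the earlier example at 00:10 Tokyo time (same schedule name, database, table, and configuration file):

```shell
$ td connector:create \
    daily_apptopia_import \
    "10 0 * * *" \
    td_sample_db \
    td_sample_table \
    load.yml \
    --timezone Asia/Tokyo
```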

List the Schedules

You can see the list of scheduled entries by td connector:list.

$ td connector:list
| Name                  | Cron         | Timezone | Delay | Database     | Table           | Config                      |
| daily_apptopia_import | 10 0 * * *   | UTC      | 0     | td_sample_db | td_sample_table | {"type"=>"apptopia", ... }  |

Show the Setting and History of Schedules

td connector:show shows the execution setting of a schedule entry.

% td connector:show daily_apptopia_import
Name     : daily_apptopia_import
Cron     : 10 0 * * *
Timezone : UTC
Delay    : 0
Database : td_sample_db
Table    : td_sample_table

td connector:history shows the execution history of a schedule entry. To investigate the results of each individual execution, use td job <jobid>.

% td connector:history daily_apptopia_import
| JobID  | Status  | Records | Database     | Table           | Priority | Started                   | Duration |
| 578066 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-18 00:10:05 +0000 | 160      |
| 577968 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-17 00:10:07 +0000 | 161      |
| 577914 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-16 00:10:03 +0000 | 152      |
| 577872 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-15 00:10:04 +0000 | 163      |
| 577810 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-14 00:10:04 +0000 | 164      |
| 577766 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-13 00:10:04 +0000 | 155      |
| 577710 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-12 00:10:05 +0000 | 156      |
| 577610 | success | 10000   | td_sample_db | td_sample_table | 0        | 2015-04-11 00:10:04 +0000 | 157      |
8 rows in set
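For example, to inspect the most recent run in the history above:

```shell
$ td job 578066
```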

Delete the Schedule

td connector:delete removes the schedule.

$ td connector:delete daily_apptopia_import


Modes for Out Plugin

You can specify the file import mode in the out section of load.yml.

append (default)

This is the default mode and records are appended to the target table.

  mode: append

replace (in td 0.11.10 and later)

This mode replaces the data in the target table. Any manual schema changes made to the target table remain intact with this mode.

  mode: replace

Available Targets

  - Application metadata
  - Application performance
  - Application ranking
  - Application SDKs metadata
  - List of categories
  - Raw ranks top charts for each category
  - Featured applications for each category
  - New app releases for each category
  - Supported countries
  - Publisher metadata
  - Publisher performance
  - SDK metadata

Available Markets

  - Apple Store (`itunes_connect`)
  - Google Play Market (`google_play`)

Rate Limits

Apptopia enforces a requests-per-minute rate limit. The rate limit automatically refreshes after a certain number of seconds.

If you have multiple transfers under the same Apptopia account, you can control the rate limit usage of each transfer via requests_per_minute_limit in the advanced settings, as long as the total does not exceed your account limit. For example, if your account has a quota of 1000 calls per minute and you create two transfers (say, for the app_performance and publisher_performance targets), you could allocate 600 requests per minute to the app_performance transfer and the remaining 400 to the publisher_performance transfer.
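The 600/400 split above corresponds to setting requests_per_minute_limit in each transfer's configuration file; a sketch, with the two hypothetical file names and the other required fields elided:

```yaml
# load_app_performance.yml — uses 600 of the 1000 calls/minute quota
in:
  type: apptopia
  target: app_performance
  requests_per_minute_limit: 600
  # client, secret, store, start_date, end_date as in the earlier example
---
# load_publisher_performance.yml — uses the remaining 400
in:
  type: apptopia
  target: publisher_performance
  requests_per_minute_limit: 400
```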