Use the DataRobot integration to interactively fetch data. Treasure Data acts as a data source for data modeling done in DataRobot.

DataRobot democratizes data science and automates the end-to-end process for building, deploying, and maintaining AI at scale. Powered by the latest open source algorithms, and available in the cloud, on-premises, or as a fully managed AI service, DataRobot enables you to use AI to drive better business outcomes.

Fetch Data from Treasure Data to DataRobot

In the DataRobot Data Connections settings, select Add new data connection and configure either the TD Hive JDBC driver or the Presto JDBC driver.
To use Hive, choose Treasure Data as the driver; to use Presto, choose Presto.

Select URL (Advanced).

Provide a Data connection name and a JDBC URL. For Treasure Data, the JDBC URL identifies the target database: replace (database_name) with the name of your database in Treasure Data. The following examples connect to the endpoint in the US region.

  • Presto: jdbc:presto://(database_name)?SSL=true

  • Hive: jdbc:td://;useSSL=true;type=hive
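If you script connection setup, the substitution can be expressed as a small helper. This is a sketch only; the function name is hypothetical, and the templates mirror the US-region examples above (adjust for your region or endpoint).

```python
def td_jdbc_url(engine, database):
    """Build a Treasure Data JDBC URL from the US-region templates.

    Hypothetical helper for illustration; `engine` is "presto" or "hive".
    """
    if engine == "presto":
        # Presto template embeds the database name.
        return f"jdbc:presto://{database}?SSL=true"
    if engine == "hive":
        # The Hive template does not embed the database name.
        return "jdbc:td://;useSSL=true;type=hive"
    raise ValueError(f"unknown engine: {engine}")
```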

After you create the data connection, select Test connection to verify that it works.

For credentials, enter the following information.

  • Presto: Enter your Treasure Data Master API key as the Username and a dummy string as the Password.

  • Hive: Enter your Treasure Data login email address as the Username and your Treasure Data login password as the Password.

DataRobot is now ready to be used with Treasure Data to build a batch prediction model.

Export the Result of Prediction to Treasure Data

If you want to export prediction results from DataRobot to Treasure Data, you can use the DataRobot Python client together with Treasure Workflow's custom script function.
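As a minimal sketch, the export step could look like the following, assuming predictions have already been retrieved with the DataRobot Python client and the `pytd` Treasure Data client library is available. The helper name, record field names, and the database and table names are hypothetical.

```python
import os


def predictions_to_records(prediction_rows):
    """Flatten DataRobot-style prediction rows (dicts with "rowId" and
    "prediction" keys) into records for a Treasure Data table.

    Field names are assumptions for illustration.
    """
    return [
        {"row_id": row["rowId"], "prediction": row["prediction"]}
        for row in prediction_rows
    ]


def export_predictions_to_td(prediction_rows, database, table):
    """Upload prediction records to Treasure Data via pytd.

    Reads TD_API_KEY from the environment; the endpoint shown is for
    the US region.
    """
    import pandas as pd  # third-party, imported lazily
    import pytd          # Treasure Data Python client, imported lazily

    client = pytd.Client(
        apikey=os.environ["TD_API_KEY"],
        endpoint="https://api.treasuredata.com",
        database=database,
    )
    df = pd.DataFrame(predictions_to_records(prediction_rows))
    # Overwrite the target table with this batch of predictions.
    client.load_table_from_dataframe(df, table, if_exists="overwrite")


# Example (requires credentials and network access; the rows would
# normally come from the DataRobot Python client, e.g. a completed
# prediction job):
#   rows = [{"rowId": 0, "prediction": 0.87}]
#   export_predictions_to_td(rows, "my_database", "datarobot_predictions")
```

In a Treasure Workflow custom script, a function like `export_predictions_to_td` would be invoked as the workflow task, with the API key supplied through workflow secrets rather than hard-coded.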

If you are interested in using this integration, contact Support.