Treasure Data provides a cloud-based analytics infrastructure accessible via SQL. Interactive engines like Presto let you crunch billions of records with ease. However, writing SQL queries can be painful for data scientists, and you would otherwise need external tools like Excel or Tableau to visualize the results. Instead, you can use Treasure Data with the Python-based data analysis library Pandas and visualize the data interactively in Jupyter Notebook.
Prerequisites
Basic knowledge of Python.
Basic knowledge of Treasure Data.
Set Treasure Data API Key
Set your master API key as an environment variable before launching Jupyter. The master API key can be retrieved from your profile in the TD Console.
$ export TD_API_KEY="1234/abcde..."
Alternatively, you can set the environment variable with a command such as the following in a Jupyter Notebook cell:
%env TD_API_KEY=123c/abcdefghjk...
Set Treasure Data API Endpoint
Set the Treasure Data API endpoint as an environment variable if your account does not belong to the US region. Endpoint information is available in the Treasure Data documentation.
$ export TD_API_SERVER="https://api.treasuredata.co.jp"
Alternatively, you can set this environment variable with a command such as the following in a Jupyter Notebook cell:
%env TD_API_SERVER=https://api.treasuredata.co.jp
Install the Necessary Packages and Configure your Environment
For more information and instructions, see Installing Conda, Pandas, matplotlib, Jupyter Notebook, and pytd.
Run Jupyter and Create your First Notebook
We’ll use Jupyter as a frontend for our analysis project.
Run Jupyter Notebook using the following command (ipython notebook is a deprecated alias for jupyter notebook):
(analysis)$ jupyter notebook
Your web browser will open the Jupyter dashboard.
Select New > Python 3.
Copy and paste the following text into your notebook:
%matplotlib inline
import os
import pandas as pd
import pytd.pandas_td as td

# Initialize the connection to Treasure Data
con = td.connect(apikey=os.environ['TD_API_KEY'],
                 endpoint='https://api.treasuredata.com')
Your notebook should now contain the code shown above. Press Shift-Enter to run the cell.
If you get "KeyError: 'TD_API_KEY'" error, try "apikey='<your master apikey>'" instead of "apikey=os.environ['TD_API_KEY']".
If it works, Jupyter didn't recognize the TD_API_KEY variable from the OS.
Confirm the TD_API_KEY again and re-launch Jupyter.Optionally, save your notebook.
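For reference, a minimal fallback cell might look like the following; the placeholder must be replaced with your actual master API key:
# Fallback: pass the master API key directly instead of reading it from
# the environment (replace the placeholder with your own key)
con = td.connect(apikey='<your master apikey>',
                 endpoint='https://api.treasuredata.com')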
Explore Data
There are two tables in sample_datasets. You can use the magic command td_tables to view all the tables in your database. Let's explore the nasdaq table.
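As a sketch, assuming your pytd version ships the IPython extension pytd.pandas_td.ipython, you can load the magic commands and list the tables like this:
%load_ext pytd.pandas_td.ipython
%td_tables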
In Jupyter, type the following syntax:
import pytd

engine = td.create_engine("presto:sample_datasets")
client = pytd.Client(database='sample_datasets')
client.query('select symbol, count(1) as cnt from nasdaq group by 1 order by 1')
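The dictionary returned by client.query can be converted into a DataFrame by hand; a minimal sketch, assuming the result format documented for pytd ({'columns': [...], 'data': [...]}):
# client.query returns a dict of column names and row data
res = client.query('select symbol, count(1) as cnt from nasdaq group by 1 order by 1')
df = pd.DataFrame(res['data'], columns=res['columns'])
df.head()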
Running a Query in Jupyter
For the purposes of this example, Presto is used as the query engine.
In Jupyter, type the following syntax:
import os
import pytd.pandas_td as td

con = td.connect(apikey=os.environ['TD_API_KEY'], endpoint="https://api.treasuredata.com")
engine = td.create_engine("presto:sample_datasets", con=con)
query = 'select symbol, count(1) as cnt from nasdaq group by 1 order by 1'
df = td.read_td_query(query, engine, index_col=None, parse_dates=None,
                      distributed_join=False, params=None)
You can also use the time_range parameter to retrieve data within a specific time range:
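A minimal sketch, assuming the read_td_table helper and its time_range parameter behave as in the pandas-td compatible API (the table name and dates here are illustrative):
# Retrieve only rows whose time column falls within the given range
df = td.read_td_table('nasdaq', engine,
                      time_range=('2000-01-01', '2010-01-01'),
                      limit=10000)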
Your data is stored in the local variable df as a DataFrame. Because the data is located in the local memory of your computer, you can analyze it interactively using the power of Pandas and Jupyter. See Time Series / Date functionality for the details of time-series data.
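For instance, here is a short sketch of typical Pandas time-series operations on the retrieved DataFrame; the time and close columns are assumptions based on the nasdaq table used above:
# Treasure Data stores the time column as a UNIX timestamp in seconds
df['time'] = pd.to_datetime(df['time'], unit='s')
df = df.set_index('time')

# Resample to monthly means and plot inline (%matplotlib inline is set above)
df['close'].resample('M').mean().plot()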
Sample Data
As your data set grows, the method from the previous step doesn't scale well. We don't recommend retrieving more than a few million rows at a time, because of memory limitations and slow network transfer. If you're analyzing a large amount of data, you need to limit the amount of data transferred.
There are two ways to do this:
1. Sample the data. For example, the nasdaq table has 8,807,278 rows. Setting a limit of 100000 retrieves 100,000 rows, which is a reasonable size (see the first sketch after this list).
2. Write SQL and limit the data on the server side. For example, since we are interested only in data related to "AAPL", let's count the number of matching records using read_td_query (see the second sketch after this list). The result is small enough that we can retrieve all the rows and start analyzing the data.
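Minimal sketches of both approaches, assuming the pandas-td compatible helpers read_td_table and read_td_query in pytd.pandas_td:
# 1. Sample client-side: retrieve at most 100,000 rows of the nasdaq table
df = td.read_td_table('nasdaq', engine, limit=100000)

# 2. Filter server-side with SQL: count the AAPL rows, then fetch them
counts = td.read_td_query("select count(1) as cnt from nasdaq where symbol = 'AAPL'", engine)
df_aapl = td.read_td_query("select * from nasdaq where symbol = 'AAPL'", engine)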
See the resources below for further information.
GitHub renders Jupyter Notebooks, so you can share the results of your analysis session with your team.