Writing Job Results into AWS Redshift

This article explains how to write job results to your existing Amazon Redshift cluster.

Prerequisites

  • Basic knowledge of Treasure Data, including the toolbelt;
  • An Amazon Redshift cluster set up and running – either a single or multi node cluster;
  • At least ‘Query only’ privileges to the Treasure Data table to be queried.

Architecture

A front-end application streams data to be collected in Treasure Data via Treasure Agent. Treasure Data periodically runs jobs on the data, then writes the job results to your Redshift cluster.



The architecture depicted above is fairly common: it enables data analysts who are well versed in Redshift to focus on their queries and visualizations without having to worry about how the data gets uploaded there.

You can limit access to your database to a list of static IP addresses. Please contact support@treasuredata.com if you need this.

Amazon Redshift configuration

Amazon Redshift can be configured in single node mode or multi node (cluster) mode. The multi node configuration provides more query computation power by parallelizing query execution across the available nodes.

Result Output URL

Format

The result output target is represented by a URL with the following format:

redshift://<username>:<password>@<hostname>:<port>/<database>/<table>

where:

  • redshift identifies result output to Redshift;
  • username and password are the credentials for the Amazon Redshift instance. These credentials are specified when first creating the Redshift cluster and they are different from the S3 public/private access keys;
  • hostname can be retrieved from the Redshift configuration page: it typically has the format <name>.<instance_id>.<region>.redshift.amazonaws.com. The name is the one provided for the cluster, the instance id is auto generated by Amazon Redshift upon creation of the cluster, and the region is the AWS region of choice. If you use an IP address instead of a hostname, you need to set the region option explicitly;
  • port is the port number through which the Redshift server is accessible. The port is optional and defaults to 5439. It may vary for multi node cluster configurations and the actual value can be retrieved from the Redshift cluster detail page;
  • database is the name of the database specified at the creation of the Redshift cluster. It can also be retrieved from the Redshift cluster detail page;
  • table is the name of a table within the above mentioned database. It may not exist at the moment the query output is executed, in which case a table with the specified name will be created for the user.
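For example, a complete result output URL might look like the following (the cluster name, credentials, database, and table name below are placeholder values, not real ones):

redshift://analyst:secret@mycluster.abc123xyz456.us-east-1.redshift.amazonaws.com:5439/mydb/access_summary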

Options

Result output to Redshift supports various options that can be specified as optional URL parameters. The options are compatible with each other and can be combined; a combined example is shown after the region option below. Where applicable, the default behavior is indicated.

ssl option

The ssl option determines whether SSL is used for the connection to Redshift.

ssl=true

Use SSL for the connection from Treasure Data to Redshift.

redshift://user:password@host/database/table?ssl=true

ssl=false (default)

Do not use SSL from Treasure Data to Redshift.

redshift://user:password@host/database/table?ssl=false

schema option

Controls the schema in which the target table is located. If not specified, the default schema is used. The default schema depends on the user’s “search_path” setting but it is usually “public”.

redshift://user:password@host/database/table?schema=target_schema

region option

Specify the region where your Redshift instance is located. This option is required if your hostname does not contain the region name.

redshift://user:password@host/database/table?region=us-east-1
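As mentioned above, these options can be combined in a single URL. For example, a hypothetical URL that enables SSL, targets a non-default schema, and sets the region explicitly could look like this (all values are placeholders):

redshift://user:password@host/database/table?ssl=true&schema=target_schema&region=us-east-1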

mode option

The mode option controls how the data in the destination table is modified. The four modes are:

  • Append
  • Replace
  • Truncate
  • Update

mode=append (default)

The append mode is the default which is used when no mode option is provided in the URL. In this mode the query results are appended to the table. If the table does not exist, it will be created.

The append mode is not atomic because it does not use a temporary table to store the incoming data but rather adds the records directly to the destination table. This is done for efficiency purposes (mostly speed) but poses the risk of finding the destination table in an intermediate state if the write fails during the result output process.

The append mode does not modify the schema of existing tables: if the schema of the source columns mismatches (in column names, types, or both) that of the destination table in Redshift, the mismatching columns are dropped from the result output write. When the destination table does not exist, it is created with a schema matching that of the source table.

Because mode=append is the default behavior, these two URLs are equivalent:

redshift://user:password@host/database/table
redshift://user:password@host/database/table?mode=append

mode=replace

The replace mode consists of replacing the entire content of an existing table with the result output of the query. If the table does not exist yet, a new table will be created.

The replace mode achieves atomicity (so that a consumer of the table always has consistent data) by performing the following three steps in a single transaction:

  1. Create a temporary table;
  2. Write to the temporary table;
  3. Replace the existing table with the temporary table using ALTER TABLE RENAME.
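
Conceptually, the sequence is similar to the following SQL. This is only a simplified sketch with hypothetical table and column names; the actual statements issued by Treasure Data may differ:

BEGIN;
-- 1. create a temporary staging table with the result schema
CREATE TABLE my_table_temp (code varchar(3), cnt bigint);
-- 2. the query results are loaded into the staging table here
-- 3. swap the staging table in place of the existing table
ALTER TABLE my_table RENAME TO my_table_old;
ALTER TABLE my_table_temp RENAME TO my_table;
DROP TABLE my_table_old;
COMMIT;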

Example:

redshift://user:password@host/database/table?mode=replace

For fastest results, we recommend having only one write connection open to your Redshift database table at a time.

mode=truncate

With the truncate mode the system first truncates the existing table to remove all records, then inserts the query results into the existing table without modifying the table’s schema.

In the event the result output contains columns whose name or type (or both) mismatches the schema of the destination table, those columns will be dropped and will not get written into the destination table. This is not a problem when the destination table does not exist: a new table will be created with a schema matching that of the result output.

The truncate mode uses a temporary table and therefore achieves atomicity of the write.

Example:

redshift://user:password@host/database/table?mode=truncate

Unlike replace, the truncate mode retains the indexes of the table.

mode=update

In the update mode a row is inserted unless it would cause a duplicate value in the columns specified by the “unique” parameter, in which case an update is performed instead. Please note that the “unique” parameter is required when using the update mode. If the table does not exist, it will be created.

This mode is atomic because it uses a temporary table to store the incoming data before attempting to insert it into the actual destination table.

Example:

redshift://...?mode=update&unique=col1        # single unique column
redshift://...?mode=update&unique=[col1,col2] # multiple unique columns
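
Putting this together with the CLI usage described below, a hypothetical query writing its result in update mode on a single unique column could look like this (the credentials, host, database, and table are placeholders):

$ td query -w -d testdb \
  --result 'redshift://username:password@host.redshift.amazonaws.com/database/table?mode=update&unique=code' \
  "SELECT code, COUNT(1) AS cnt FROM www_access GROUP BY code"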

Usage

Only the CLI supports query result output to Redshift at the moment.

CLI

To output the result of a single query to Amazon Redshift, add the --result option to the td query command. After the job has finished, the results will be written into your Redshift instance.

$ td query -w -d testdb \
  --result 'redshift://username:password@host.redshift.amazonaws.com/database/table?mode=replace' \
  "SELECT code, COUNT(1) FROM www_access GROUP BY code"

To create a scheduled query whose output is systematically written to Redshift, add the --result option when creating the schedule with the td sched:create command:

$ td sched:create hourly_count_example "0 * * * *" -d testdb \
  --result 'redshift://username:password@host.redshift.amazonaws.com/database/table?mode=replace' \
  "SELECT COUNT(*) FROM www_access"

Tips

When your Redshift table didn’t get any data even though the Result Output to Redshift job succeeded

Result Output to Redshift tries to convert column types if the destination table already exists. If the conversion fails, the column becomes NULL, and all records will be rejected if the corresponding column on Redshift is a NOT NULL field.
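
One way to investigate is to compare the destination table’s schema with the columns returned by your query. As a sketch, assuming the destination table is named your_table and is visible in your search_path, the following Redshift query lists each column’s type and whether it is declared NOT NULL:

SELECT "column", type, notnull
FROM pg_table_def
WHERE tablename = 'your_table';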

Supported Regions

Result Output to Redshift is able to export data to several regions. The currently supported regions are the following:

  • us-east-1 (US Standard)
  • us-west-2 (Oregon)
  • eu-west-1 (Ireland)
  • ap-northeast-1 (Tokyo)

If you need another region, please contact support.

