Release Note 20141104

Features & Improvements

This is the only public-facing feature we introduced in this release:

Backend: SSL Support for Result Output to MySQL

Query results can now be written into an external MySQL RDBMS over an SSL-encrypted connection.

This ensures that usernames and passwords for the destination RDBMS are sent over an encrypted connection, providing a minimum level of security for the data exchange. To enable SSL communication, please specify the ‘ssl=true’ parameter; SSL is disabled by default:

mysql://username:password@host/database/table?ssl=true
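
For example, such a URL can be assembled as follows (a minimal Python sketch; the helper name and the percent-encoding of credentials are our own illustration, not part of the product):

    from urllib.parse import quote

    def mysql_result_url(user, password, host, database, table, ssl=True):
        # Percent-encode the credentials so reserved characters such as
        # '@' or '/' survive the URL syntax.
        url = "mysql://%s:%s@%s/%s/%s" % (
            quote(user, safe=""), quote(password, safe=""),
            host, database, table)
        if ssl:
            url += "?ssl=true"  # SSL is disabled by default
        return url

    print(mysql_result_url("username", "p@ssword", "db.example.com",
                           "mydb", "mytable"))
    # -> mysql://username:p%40ssword@db.example.com/mydb/mytable?ssl=true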

For details, please refer to the Writing Job Results into your MySQL Tables documentation page.

Bug Fixes

These are the most important bug fixes made in this release:

APIs: Aggregation Jobs

  • [Problem]
    The status of legacy Aggregation job types could not be retrieved via either the REST or Console APIs; doing so returned a response with error code 500.
    [Solution]
    Aggregation jobs were deprecated more than two years ago. Since then, the APIs have changed to take advantage of single-table inheritance, but the model for Aggregation jobs was not maintained.
    The Aggregation jobs' model was added back to make sure details about these legacy job types can still be retrieved (single-table inheritance is illustrated in the sketch below).
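
For illustration, the sketch below shows single-table inheritance in Python with SQLAlchemy (the actual models are Rails/ActiveRecord; the job subtypes and column names shown are assumptions). All job subtypes share one table and are dispatched on a discriminator column, so rows whose subtype model is missing cannot be loaded, which is how the legacy Aggregation rows broke.

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Job(Base):
        __tablename__ = "jobs"
        id = Column(Integer, primary_key=True)
        type = Column(String(32))  # discriminator column
        status = Column(String(16))
        __mapper_args__ = {"polymorphic_on": type,
                           "polymorphic_identity": "job"}

    class HiveJob(Job):
        # Subclasses share the "jobs" table; only the identity differs.
        __mapper_args__ = {"polymorphic_identity": "hive"}

    class AggregationJob(Job):
        # The model that had to be added back for legacy rows to load.
        __mapper_args__ = {"polymorphic_identity": "aggregation"}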

APIs: Unsupported Job Result Formats

  • [Problem]
    Retrieving the result of a job in a format that is not among the supported ones (e.g. /v3/job/result/?format=unsupported; csv, tsv, json, msgpack, and msgpack.gz are currently supported) caused a response with error code 500.
    [Solution]
    The root cause was calling Rails' redirect method twice in the same controller method when the requested format did not match any of the supported ones, which Rails does not allow; this caused the request to return a 500 error code.
    We modified the logic to return the standard 406 (Not Acceptable) HTTP error code for any format that is not supported (see the sketch below).
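
The sketch below illustrates the fixed control flow in Python with Flask (standing in for the actual Rails controller; endpoint and parameter names are assumptions): the requested format is validated once, before any redirect is attempted, and unsupported formats are answered with 406.

    from flask import Flask, abort, request

    app = Flask(__name__)
    SUPPORTED_FORMATS = {"csv", "tsv", "json", "msgpack", "msgpack.gz"}

    @app.route("/v3/job/result/<job_id>")
    def job_result(job_id):
        fmt = request.args.get("format", "tsv")
        if fmt not in SUPPORTED_FORMATS:
            abort(406)  # Not Acceptable, instead of a second redirect and a 500
        # Only one redirect/response path is ever taken per request.
        return "redirect to the %s result of job %s" % (fmt, job_id)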

APIs: Auto Schema Detection Issue With Long Schema

  • [Problem]
    When the length of the schema exceeds the maximum limit, new columns cannot be added to it and import callbacks fail, causing the table's record count to stop being updated.
    [Solution]
    The schema of a table is stored as a JSON-formatted string with a maximum size of 65536 characters. When this limit is reached, attempts to add new columns to the schema fail validation during the import-complete callback, terminating execution of the callback before the table's record count is updated.
    We modified the logic to prevent the schema length from ever exceeding 65536 characters by skipping new columns until the updated schema fits within the maximum size (see the sketch below). This makes sure the callback succeeds and the table's record count is kept current. When this limit is encountered, users are invited to switch the table to ‘Manual schema’ mode in order to manage more precisely which columns should be part of the table's schema.
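
The sketch below illustrates the skipping logic in Python (function and column-entry shapes are assumptions; the actual implementation lives in the import-complete callback):

    import json

    MAX_SCHEMA_LENGTH = 65536  # limit on the JSON-serialized schema string

    def add_detected_columns(schema, new_columns):
        # Append auto-detected columns, skipping any that would push the
        # serialized schema past the limit, so the import-complete callback
        # always passes validation and the record count stays current.
        for column in new_columns:
            candidate = schema + [column]
            if len(json.dumps(candidate)) > MAX_SCHEMA_LENGTH:
                continue  # skip this column; schema stays within the limit
            schema = candidate
        return schema

    schema = add_detected_columns([["time", "long"]], [["new_col", "string"]])
    print(json.dumps(schema))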

Presto: MessageUnpacker Improvements

  • [Problem]
    Some queries occasionally failed with no discernible failure pattern.
    [Solution]
    We found the root cause to be a problem with decoding a MessagePack-encoded result that contained multi-byte (Japanese) characters.
    We improved the ‘MessageUnpacker.unpackString’ method to handle this type of boundary decoding error, ensuring that the execution of Presto queries remains deterministic (the boundary problem is illustrated below).
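
The sketch below reproduces the boundary problem in Python (the actual fix is in the Java ‘MessageUnpacker.unpackString’ method): a multi-byte UTF-8 character split across two read-buffer chunks breaks a naive per-chunk decode, while an incremental decoder carries the partial byte sequence over to the next chunk.

    import codecs

    data = "日本語".encode("utf-8")  # 9 bytes encoding 3 characters
    chunks = [data[:4], data[4:]]    # split in the middle of a character

    try:
        # Decoding each chunk independently fails at the chunk boundary.
        print("".join(c.decode("utf-8") for c in chunks))
    except UnicodeDecodeError as e:
        print("naive per-chunk decode fails:", e)

    # An incremental decoder buffers the incomplete byte sequence and
    # completes it with the next chunk, so the result is deterministic.
    decoder = codecs.getincrementaldecoder("utf-8")()
    print("incremental decode:", "".join(decoder.decode(c) for c in chunks))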

Last modified: Nov 07 2014 01:21:53 UTC
