This is a summary of Treasure Data features and improvements introduced in the March 1st, 2020 release. If you have any product feature requests, submit them to feedback.treasuredata.com.
- Tracking the Customer Journey with Funnels (Beta)
- Identity Federation for Okta (Beta)
- Easier to Re-Run a Workflow
- Data Connectors
- Facebook Lead Ads Import Integration (Beta)
- Sprinklr Import Data Connector
- Azure Blob Cloud Storage Input Improvement
- Google Sheets Export Integration Improvement
- Twitter Tailored Audience Export Integration Improvement
- Hive2
- New Location for Product Documentation
- Bug Fixes and Other Improvements
Funnels, a new feature in Audience Studio, makes it easier for marketers to see exactly where their customers are in the customer journey and to create more customer-tailored marketing campaigns using multi-channel activations.
The Funnels feature enables users to create and customize a funnel with up to 8 stages in the Audience Studio to reflect the exact buying experience of their customers. After the Funnel stages are created, marketers can further analyze and refine the stages and activate specific stages for campaigns.
Funnels will be available as a BETA release beginning in mid-March. If you are interested in learning more about adding this feature, contact Technical Support at support@treasuredata.com.
More details about how to use this new feature are available in Working with Funnels.
Treasure Data now supports a second Identity Provider (IdP) in its Identity Federation offering: Okta, using the SAML 2.0 protocol.
Treasure Data works with Okta to enable your TD account users to use one ID to log into your Treasure Data accounts, even if the user is assigned multiple TD accounts. You configure your Okta IdP to authenticate your Treasure Data users and thereby control the login policy for your users through the IdP.
Identity Federation’s key benefit is providing heightened security and tighter authentication for both on-premises and cloud applications. You can centrally manage all users and their respective permissions through your corporate directory service.
If you are interested in participating in the Identity Federation beta, contact your Customer Success Representative.
In TD Console, you can re-run any workflow session from the Treasure Workflow status details page, including workflow sessions that no longer appear in the Run History tab.
In the upper right corner of the workflow status details page, click Run or Run Earlier Revision. In the Run Earlier Revision pane, enter the session ID of the workflow session that you want to re-run. By default, the most recent revision is provided.
Ingest Facebook Leads data directly from the Facebook Graph API into a TD database for activation. The input connector supports incremental ingestion by Form ID or Ads ID to provide quick updates of leads data.
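As a rough sketch, an import configuration for this connector might look like the following; the type and parameter names below are hypothetical placeholders, so check the connector documentation for the actual keys:

in:
  type: facebook_lead_ads      # hypothetical type name
  access_token: xxxxxxxx       # placeholder Graph API token
  form_id: "1234567890"        # hypothetical: the lead form to ingest
  incremental: true            # hypothetical: fetch only new leads on each run
out:
  mode: append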
If you are interested in participating in the beta, contact your customer success representative.
Treasure Data enables you to stitch Sprinklr data from Facebook or Twitter channels with the Audience Studio user profiles that you have in Treasure Data.
The Sprinklr data connector for import supports Sprinklr’s Report API (https://developer.sprinklr.com/docs/read/api_10/reports/Report_Read).
Use of the API makes the Sprinklr integration with Treasure Data seamless.
Incremental loading is now supported in the Azure Blob Cloud Storage data connector for import. You can specify last_path to indicate where ingestion should resume. We also fixed the behavior so that the file matching last_path is excluded when loading data. If you have not updated your existing workflow logic accordingly, contact our support team. For more information, refer to Microsoft Azure Blob Storage Import Integration.
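For illustration, a minimal import configuration using last_path might look like this sketch (the account values are placeholders; the parameter names follow the underlying Embulk Azure Blob Storage input plugin, so verify them against the integration documentation):

in:
  type: azure_blob_storage
  account_name: myaccount             # placeholder storage account
  account_key: xxxxxxxx               # placeholder access key
  container: my-container             # placeholder container
  path_prefix: logs/event             # ingest files under this prefix
  last_path: logs/event_20200229.csv  # resume after this file; it is excluded from the load
out:
  mode: append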
A new version of the Google Sheets data connector for export is available. This improved connector is fully compatible with the Google Drive V3 API and was developed in response to the Google Sheets V3 API deprecation scheduled for early March 2020.
You must re-authenticate all your existing Google Sheets authentications to add the Google Drive metadata access permission. Failure to re-authenticate results in a disruption of your Google Sheets output jobs. For migration instructions, see the Appendix in Google Sheets Export Integration.
A new version of Twitter Tailored Audience for export is available; no action is required. This improved connector is fully compatible with Twitter’s Ads API v6 and was developed in response to the Ads API v5 deprecation.
Hive2 supports the following new UDFs.
TD_INTERVAL (the same UDF that Presto already supports)
Usage: TD_INTERVAL(int/long time, string interval_string [, string default_timezone = 'UTC']) → boolean
Example:
- SELECT ... WHERE TD_INTERVAL(time, '-7d')
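For instance, the following sketch (the table name web_events is hypothetical) counts rows from the last seven days, evaluating the interval boundaries in Japan time via the optional third argument:

SELECT COUNT(1)
FROM web_events
WHERE TD_INTERVAL(time, '-7d', 'JST')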
TD_TIME_FORMAT supports the week year format
Usage: TD_TIME_FORMAT(long unix_timestamp, 'wyear' [, string timezone = 'UTC'])
Example:
- TD_TIME_FORMAT(1582859260, 'wyear') → 2020
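The week year differs from the calendar year only near year boundaries, which is the case 'wyear' exists for. An illustrative sketch, assuming ISO-style week numbering (verify the exact convention in the TD_TIME_FORMAT documentation):
- TD_TIME_FORMAT(1577664000, 'yyyy') → 2019 (1577664000 is 2019-12-30 UTC)
- TD_TIME_FORMAT(1577664000, 'wyear') → 2020 (that day falls in the first week of 2020)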
TD_URL_ENCODE / TD_URL_DECODE
Usage: td_url_encode(value) → string
Usage: td_url_decode(value) → string
TD_URL_ENCODE: Escapes value by encoding it so that it can be safely included in URL query parameter names and values
TD_URL_DECODE: Unescapes the URL-encoded value. This function is the inverse of td_url_encode().
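Example (the encoded form shown is an assumption based on typical form-style query encoding; the round trip back to the original value is the documented behavior):
- TD_URL_ENCODE('key=a b&c') → 'key%3Da+b%26c'
- TD_URL_DECODE(TD_URL_ENCODE('key=a b&c')) → 'key=a b&c'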
TD_PIVOT
- Usage: TD_PIVOT(key column, value column, 'key_value1,key_value2')
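To make the before-and-after concrete, suppose vtable holds key/value rows like these (hypothetical data), which TD_PIVOT turns into one wide row per uid:

vtable rows: (uid, key, value) = (101, 'c1', 'x'), (101, 'c2', 'y'), (101, 'c3', 'z')
pivoted row: (uid, c1, c2, c3) = (101, 'x', 'y', 'z')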
PIVOT by Hive 0.13 requires the following SQL:
SELECT
uid,
kv['c1'] AS c1,
kv['c2'] AS c2,
kv['c3'] AS c3
FROM (
SELECT uid, to_map(key, value) AS kv
FROM vtable
GROUP BY uid
) t;
You can now get the same result by using the following SQL:
For string columns:
SELECT
  uid,
  element_at(COLLECT_LIST(c1), 0) AS c1,
  element_at(COLLECT_LIST(c2), 0) AS c2,
  element_at(COLLECT_LIST(c3), 0) AS c3
FROM vtable LATERAL VIEW TD_PIVOT(key, value, 'c1,c2,c3') t
GROUP BY uid;
For numeric columns:
SELECT
  uid,
  SUM(c1) AS c1,
  SUM(c2) AS c2,
  SUM(c3) AS c3
FROM vtable LATERAL VIEW TD_PIVOT(key, value, 'c1,c2,c3') t
GROUP BY uid;
TD_UNPIVOT
- Usage: TD_UNPIVOT('key_name1, key_name2, ...', value_column1, value_column2, ...)
UNPIVOT by Hive 0.13 requires the following SQL:
SELECT uid, 'c1' AS key , c1 AS value FROM htable
UNION ALL
SELECT uid, 'c2' AS key , c2 AS value FROM htable
UNION ALL
SELECT uid, 'c3' AS key , c3 AS value FROM htable
Now you can get the same result by using the following SQL:
SELECT uid, t1.key, t1.value FROM htable
LATERAL VIEW TD_UNPIVOT('c1,c2,c3',c1,c2,c3) t1
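For example, with the same hypothetical data as above, one wide htable row (uid, c1, c2, c3) = (101, 'x', 'y', 'z') unpivots into three key/value rows: (101, 'c1', 'x'), (101, 'c2', 'y'), (101, 'c3', 'z').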
NOTE: These UDFs are not supported by Hive 0.13
Treasure Data’s technical content is moving to a new delivery platform, improving your experience in the following areas:
Search
Watch capability
Organization of the content
Formatting of numbered lists
We welcome your feedback on the delivery platform, the structure of the information, and technical content improvements. More improvements to the content are coming. Let us know what you’d like to see now or in the future. We hope you enjoy the new platform as much as we do.
You can submit your comments about the documentation platform through your Treasure Data Customer Success representative.
Fixed a minor issue in the job result Download feature
- The CSV header row was not converted to SJIS encoding properly; it is now converted correctly
Added validation for the BigQuery Input Connector configuration
- The "Import Large Dataset" option requires that BigQuery and the GCS bucket used for temporary data are in the same location
Added validation for Google Sheets Syndication
- The Start Location parameter no longer accepts invalid position values
Fixed the Database Selector behavior in the Saved Query Editor
- Users can now save and run a saved query even if its assigned database was deleted
Improved the auto-completion functionality in the Query Editor to support table names and database names
Improved error handling for Custom Scripting when a user encounters “Rate exceeded (Service: AWSLogs; Status Code: 400; Error Code: ThrottlingException)”
Fixed an issue with output log messages from Custom Scripting
- In some cases, the standard output messages from Custom Scripting didn’t appear; they are now displayed