About the Hive Engine

Apache Hive is a data warehouse system built on top of Apache Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in various databases and file systems that integrate with Hadoop. Hive offers a simple way to apply structure to large amounts of unstructured data and then perform batch SQL-like queries on that data.

The Treasure Data Hive service (TD Hive) provides batch data processing on data stored in Treasure Data’s data lake, based on Apache Hive. TD eliminates the need to run your own Hadoop clusters to handle Hive processing. Instead, Treasure Data operates compute clusters for running Hive jobs.

The Customer Data Platform (CDP) application uses Hive jobs for some of its internal operations, and you can also run your own Hive jobs. You can submit SELECT or DML queries written in Hive’s query language through the TD Console, API calls, the TD Toolbelt, or TD workflows. The service queues and executes the queries and returns the results. You can also design your system so that results are delivered to destinations specified in your Result Output.
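
For example, a minimal TD workflow task that submits a Hive query might look like the following sketch; the task name, query file, and database name are placeholders:

+count_events:
  td>: queries/count_events.sql
  database: my_db
  engine: hive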

The Hive query language (HiveQL), powered by Apache Hive, is the primary data processing method for Treasure Data.

Treasure Data is a CDP that allows users to collect, store, and analyze their data in the cloud. Treasure Data manages its own Hadoop cluster, which accepts queries from users and executes them using Apache Tez. HiveQL is one of the languages it supports.

Hive and HiveQL Differences

  • TD Hive supports the flexible schema capabilities of the TD platform; a schema can be inferred from data loaded into tables or set explicitly, as shown in the example after this list.
  • Treasure Data supports HiveQL semantics, but unlike Apache Hive, it allows you to set and modify the schema at any time. A table schema does not need to be defined up front.
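
For example, you can set a table’s schema explicitly with the TD Toolbelt. This is a sketch; the database, table, and column names are hypothetical:

td schema:set my_db my_table user_id:long event_time:string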

Hive and Other Hadoop-Based Resource Pools

Hive Resource Pools allow you to divide the resources available for Hive into pools dedicated to specific workloads. You can then organize their usage across projects, groups, or use cases.

Resource pools are helpful for the following challenges:

  • Ensuring that high-priority jobs receive the resources they need to run within a strict SLA, such as a batch job that must finish within a specific overnight time window.
  • Reserving some resources for ad hoc jobs that must run with the lowest possible latency.
  • Limiting the maximum resources, and therefore the cost, of running lower-priority jobs.

You can set guaranteed minimum and maximum resources for jobs running in specific resource pools.

This feature is enabled upon request. Contact TD Support or your primary account representative.

Understanding Hadoop Resources and Resource Pools

Treasure Data’s Hadoop cluster is shared among many customers. Each customer starts with a single queue for all submitted Hadoop jobs, such as Hive queries. A number of parallel processes, based on your plan, is dedicated to processing those jobs as resources permit. The number of cores is determined by your Hadoop Compute Units, as follows:

  • A customer is guaranteed a minimum of 2 processing cores per compute unit at all times. For example, a customer with 20 Hadoop compute units gets 40 cores of minimum processing at all times.
  • During off-peak periods on Treasure Data’s Hadoop cluster (across all customers), a customer may be granted up to 4x their guaranteed compute cores, to process their jobs faster.

When you create multiple resource pools, you designate the minimum percentage of resources to devote to them. For example, you might create three pools:

  • ad hoc assigned 25% of plan capacity
  • batch assigned 65% of capacity
  • best_effort assigned 10% of capacity

Based on the assigned percentages, a minimum number of CPU cores is guaranteed for jobs in each pool. Continuing the example of a plan with 40 guaranteed cores:

  • ad hoc jobs will always get at least 10 (=25% of 40) cores
  • batch jobs will always get at least 26 (=65% of 40) cores
  • best_effort jobs will always get at least 4 (=10% of 40) cores

The values are rounded down to the nearest whole number of cores. Resource pools are guaranteed a minimum of one core, unless configured to 0% allocation. During off-peak periods, jobs might be granted up to the full off-peak capacity of the whole plan. For example, any job could be granted up to 160 (=40*4) cores if the overall Treasure Data Hadoop workload permits.

If a job from one resource pool is running with more than its guaranteed resources and a job is submitted to another pool for which there are not currently enough resources, some portion of the processing for the first job may be preempted or deferred. This preemption does not mean that the job has to restart, only that it might take longer to run to completion. Jobs are always granted their minimum guaranteed resources.

Default Resource Pool Configuration

The default resource pool configuration for a Hadoop plan is a single resource pool, named hadoop2 or hdp2, configured with 100% of resources.

To determine your current configured resource pools, contact Treasure Data support.

Submitting a Job with a Resource Pool Name That Is Not Configured

If you submit a job with a resource pool name that is not configured, the job runs in the default resource pool with its configured available resources. The default is configured for 100% resource usage.

Selecting the Resource Pool Your Job Runs On

If resource pools are configured and no resource pool is specified for a given job, the job runs in the default resource pool, which is named hadoop2 or hdp2 and configured with 100% of resources.

To change which resource pool is used for jobs that do not specify one, contact Treasure Data support.

If you already have a resource pool name specified by a Hive query hint, providing a conflicting --pool-name argument on the command line causes the job to fail.
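
For example, a query file containing the pool_name hint described below is pinned to that pool; submitting the same file with a different --pool-name causes the job to fail. The pool names, database, and file name here are illustrative:

-- @TD pool_name: batch
select count(1) from mytable;

td query --type hive --database my_db --pool-name adhoc -q my_query.sql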

TD Console Option

For Hive queries, you can add a query hint, or magic comment, to select a resource pool. For example:

-- @TD pool_name: batch
select count(1) from mytable;

Command-Line Options

To specify a resource pool for a Hive query, table export, or bulk import job at the command line, add the --pool-name argument to td. For example:

A Hive query can be run as follows:

td query --type hive --database <database_name> --pool-name <resource_pool_name> "select count(1) from mytable"

A table export can be run as follows:

td table:export example_db table1 --s3-bucket mybucket -k KEY_ID -s SECRET_KEY --pool-name batch
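
A bulk import perform step can be run as follows. This is a sketch: the session name is a placeholder, and it assumes your TD Toolbelt version accepts --pool-name on this command:

td bulk_import:perform my_session --pool-name batch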