When interacting with AI Agents — whether created by Treasure Data, partners, you, your company, or others — users may receive a response that is inaccurate, misleading, or false.
The output of an interaction with an agent, and its accuracy, depend fundamentally on the configuration of the knowledge bases with which the agent is integrated. For example, if a knowledge base is out of date, improperly configured, or missing data, the agent's analysis will be predicated on that inaccurate or incomplete information and may return an inaccurate, misleading, or false answer.
At other times, the AI Agent may "hallucinate" or "confabulate," providing an answer that, despite its programming, is objectively false or misleading. This is an unintended consequence of the technology that powers the AI.
Despite these limitations inherent in the technology, Treasure Data has conducted a rigorous evaluation and chosen the LLM and architecture most suitable and best performing for the type of output expected. Treasure Data chose Anthropic's Claude suite of LLMs for this reason, as well as for its integrated Constitutional AI (more here: https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback), which provides an additional layer of governance and security.
It is important that users exercise their best (human) judgment when interacting with AI Agents and, as appropriate, cross-check responses with other sources. Because of the technology's inherent limitations, Treasure Data cannot guarantee the accuracy, completeness, fitness for purpose, or reliability of output, or that the output will be free from omissions or errors. Our Customer (i.e., the organization for which you, the user of our AI Services, work) is responsible for the use of the output and for evaluating, editing, and amending it as appropriate before using or relying on it. You should familiarize yourself with your employer's policies to understand who within your organization holds the roles and responsibilities in this area.
Note that your and the Customer's use of our AI Services is in all cases subject to the Treasure Data AI Acceptable Use Policy and the AUPs of our suppliers (linked in that document). Please take the time to read them.
If you intend to use any output produced with Treasure Data AI Services in a way that may have legal or other significant effects on individuals, your organization must ensure that the final decision is made by a qualified person who reviews that output and takes into consideration factors beyond any response or recommendation obtained through Treasure Data AI Services. Your AI is only as good as the data being used to power it!
To best use and interpret AI System outputs, please consider the following practical tips for interacting with the AI System and reviewing its responses:
When interacting with the AI System, provide as much context and information as possible, and be specific. Vague or short inputs may produce less-than-optimal results. Many AI systems will ask for clarification where needed, but do not rely on or expect the AI System to proactively ask this of you.
Before the AI System returns an answer or response, consider what an appropriate and reasonable response would look like.
If the response is unexpected, you may ask the Agent to explain its rationale and reasoning, or to re-attempt the task it just performed.
To further evaluate the output, please refer to the article How do Treasure Data's AI agents come up with a response or decision?
Always apply your best human judgment when evaluating the response; ask a trusted supervisor if something seems off or unexpected.
The AI System performs best with frequent, short interactions on a single topic or line of inquiry, rather than one long, multi-part discussion.
If something seems amiss, or if things are not going the way you anticipated or hoped, please close that session and try again in a new one.
Treasure Data currently supports the following foundation models:
Anthropic Claude 3.5 Sonnet v1 - used by default in our Audience Agent and available as an option for agents created in AI Agent Foundry
Anthropic Claude 3 Sonnet - an option for agents created in AI Agent Foundry
Anthropic Claude 3 Haiku - an option for agents created in AI Agent Foundry
More information about the AI Models in use with Treasure Data can be found at Anthropic.com and the following Model Cards:
Finally, questions or feedback about this or other topics can be submitted through the in-product Feedback and Bug Reporting capabilities, through Customer Support, through the Customer's Account Team, or via the email alias Gen.AI@Treasure-data.com.