# How Treasure Data's AI Systems Work

Treasure Data's AI system was designed with security and privacy in mind, using the most straightforward architecture that provides the greatest range of capabilities and room for future development. To this end, the system can be conceptualized as operating in several paradigms, described below.

## Single Agent Call to a Large Language Model (LLM)

The simplest interaction with the AI system occurs when the user interacts with a single agent and that agent directly calls the LLM. It may perform the following steps:

1. The user inputs a user prompt into the system.
2. The user prompt and system prompt are sent to the service hosting the LLM (e.g. Amazon Bedrock).
3. The service hosting the LLM provides a response back to the agent.
4. The agent passes the response back to the UI, which displays it to the user.

In this case, the output is generated solely by the LLM, which returns a response that satisfies the objective function on which it was trained, with the user prompt and system prompt as context.

## Single Agent Call to an LLM with Retrieval Augmented Generation (RAG) to a Knowledge Base

A more sophisticated interaction uses the LLM to recognize when a query is best answered with a Knowledge Base-grounded response, leverages the RAG architecture to query the Knowledge Base directly, and then interprets the results. It may perform the following steps:

1. The user inputs a user prompt into the system.
2. The user prompt and system prompt are sent to the service hosting the LLM (e.g. Amazon Bedrock).
3. The service hosting the LLM provides a response back to the agent.
   1. The response would include converting natural language requests into machine-readable code (e.g. SQL).
4. The agent passes the code to an environment within Treasure Data that executes it (e.g. runs the query), waits for it to complete, and returns the result (e.g. the query response) back to the agent, which then sends it to the service hosting the LLM as additional context.
5. The service hosting the LLM then crafts a response using the output of the code/query along with prior context.
6. The agent passes the response back to the UI, which displays it to the user.

## Supervisor Agent & Sub-agent Call to an LLM

Another sophisticated interaction uses a supervisor agent to leverage the LLM to identify a sub-agent with a narrowly defined task to execute, which then returns a response back to the supervisor agent. It may perform the following steps:

1. The user inputs a user prompt into the system.
2. The user prompt and the supervisor agent's system prompt are sent to the service hosting the LLM (e.g. Amazon Bedrock).
3. The service hosting the LLM provides a response back to the supervisor agent that requires it to call a sub-agent that was pre-configured in the AI Agent Foundry.
4. The context is passed to the sub-agent, which leverages the user prompt and the supervisor agent's system prompt along with its own system prompt.
5. The service hosting the LLM provides a response back to the sub-agent.
6. The sub-agent passes the output back to the supervisor agent.
7. The supervisor agent passes the response back to the service hosting the LLM to verify that the user's intent was fulfilled.
8. The service hosting the LLM then crafts a response with all the prior steps as context.
9. The agent passes the response back to the UI, which displays it to the user.

## Supervisor Agent & Sub-agent Call to an LLM with RAG to a Knowledge Base

One final harmonization of agents encompasses an orchestration of multiple agents in which certain agents have specific tools available, such as calling the RAG to a Knowledge Base. A similar architecture and process is used in Treasure Data's Audience Agent.

1. The user inputs a user prompt into the system.
2. The user prompt and the supervisor agent's system prompt are sent to the service hosting the LLM (e.g. Amazon Bedrock).
3. The service hosting the LLM provides a response back to the supervisor agent that requires it to call a sub-agent that was pre-configured in the AI Agent Foundry.
4. The context is passed to the sub-agent, which leverages the user prompt and the supervisor agent's system prompt along with its own system prompt.
   1. The response would include converting natural language requests into machine-readable code (e.g. SQL).
5. The code is passed to an environment within Treasure Data that executes it (e.g. runs the query), waits for it to complete, and returns the result (e.g. the query response) back to the service hosting the LLM as additional context.
6. The service hosting the LLM then crafts a response using the output of the code/query along with prior context.
7. The service hosting the LLM provides a response back to the sub-agent.
8. The sub-agent passes the output back to the supervisor agent.
9. The supervisor agent passes the response back to the service hosting the LLM to verify that the user's intent was fulfilled.
10. The service hosting the LLM then crafts a response with all the prior steps as context.
11. The agent passes the response back to the UI, which displays it to the user.
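The simplest paradigm described above, a single agent calling an LLM, can be sketched in Python. This is a minimal illustration, not Treasure Data's actual implementation: the `Agent` class and `stub_llm` function are hypothetical, and the stub stands in for the service hosting the LLM (e.g. Amazon Bedrock).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A minimal single agent: forwards prompts to a hosted LLM and relays the reply."""
    system_prompt: str
    call_llm: Callable[[str, str], str]  # stand-in for the hosted LLM service

    def handle(self, user_prompt: str) -> str:
        # Steps 1-2: the user prompt and system prompt are sent to the LLM service.
        response = self.call_llm(self.system_prompt, user_prompt)
        # Steps 3-4: the LLM's response is passed back toward the UI unchanged.
        return response

# A deterministic stub so the sketch is self-contained; a real deployment
# would call the hosted LLM here instead.
def stub_llm(system_prompt: str, user_prompt: str) -> str:
    return f"[answered under '{system_prompt}']: {user_prompt}"

agent = Agent(system_prompt="You are a helpful analytics assistant.", call_llm=stub_llm)
print(agent.handle("How many profiles are in my segment?"))
```

Because the agent adds no tools of its own here, the quality of the answer rests entirely on the LLM and the two prompts it receives as context.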
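The full supervisor/sub-agent orchestration above can likewise be sketched end to end. Everything here is illustrative: `SupervisorAgent`, `SubAgent`, the `registry` dict (standing in for sub-agents pre-configured in the AI Agent Foundry), `stub_llm`, and `stub_run_query` (standing in for the Treasure Data query-execution environment) are hypothetical names, not Treasure Data's API.

```python
class SubAgent:
    """Narrowly scoped agent: converts the request to SQL, executes it, summarizes."""
    def __init__(self, system_prompt, call_llm, run_query):
        self.system_prompt = system_prompt
        self.call_llm = call_llm
        self.run_query = run_query

    def handle(self, context: str) -> str:
        # Step 4/4.1: the LLM converts the natural-language request into SQL.
        sql = self.call_llm(self.system_prompt, f"Write SQL for: {context}")
        # Step 5: the code runs in the execution environment; the agent waits for rows.
        rows = self.run_query(sql)
        # Step 6: the LLM crafts a response using the query output plus prior context.
        return self.call_llm(self.system_prompt, f"Answer '{context}' given rows {rows}")

class SupervisorAgent:
    """Routes a request to a pre-configured sub-agent, then verifies intent was met."""
    def __init__(self, system_prompt, call_llm, registry):
        self.system_prompt = system_prompt
        self.call_llm = call_llm
        self.registry = registry  # stand-in for sub-agents in the AI Agent Foundry

    def handle(self, user_prompt: str) -> str:
        # Steps 2-3: the LLM names the sub-agent best suited to the request.
        choice = self.call_llm(
            self.system_prompt,
            f"Pick an agent from {sorted(self.registry)} for: {user_prompt}",
        )
        # Steps 4-8: delegate, then pass the output back to the LLM for a final check.
        draft = self.registry[choice].handle(user_prompt)
        # Steps 9-11: the verified response flows back toward the UI.
        return self.call_llm(self.system_prompt, f"Confirm this fulfills '{user_prompt}': {draft}")

# --- deterministic stubs so the sketch runs end to end without a hosted LLM ---
def stub_llm(system_prompt: str, prompt: str) -> str:
    if prompt.startswith("Pick an agent"):
        return "audience"                       # routing decision
    if prompt.startswith("Write SQL"):
        return "SELECT COUNT(*) FROM profiles"  # generated machine-readable code
    return prompt                               # echo for summarize/verify steps

def stub_run_query(sql: str):
    return [(1234,)]  # pretend result from the query-execution environment

registry = {"audience": SubAgent("You answer segment questions.", stub_llm, stub_run_query)}
supervisor = SupervisorAgent("You route requests to sub-agents.", stub_llm, registry)
print(supervisor.handle("How many profiles are in my segment?"))
```

The point of the structure is the division of labor: the supervisor owns routing and intent verification, while each sub-agent owns exactly one tool-backed task, which keeps every LLM call narrowly scoped.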