# Managing the Machine Learning Pipeline

After ingesting your data into Treasure Data, you can build a predictive model using Treasure Data queries and workflows. The typical machine learning pipeline for supervised learning is:

1. data preparation
2. building a model
3. evaluating the model
4. predicting unseen data with the trained model

You can use Treasure Data Workflows to manage your supervised learning process. Treasure Data (TD) provides AutoML as a feature that can be configured within the familiar [Treasure Workflow](https://docs.treasuredata.com/display/PD/Treasure+Workflow) environment. Learn more about [AutoML](/products/customer-data-platform/machine-learning/automl).

![](/assets/105578848.2775c3c343559d67e1778594d2fc1c2403686c7398feafe43ccbdbe8a0ad1bce.3cb60505.png)

By using Digdag Treasure Data operators within your TD Workflow, you can automate your machine learning pipeline from data preparation to prediction. Digdag Treasure Data operators include:

* [td>: Treasure Data queries](https://docs.digdag.io/operators/td.md)
* [td_run>: Treasure Data saved queries](https://docs.digdag.io/operators/td_run.md)
* [td_ddl>: Treasure Data operations](https://docs.digdag.io/operators/td_ddl.md)
* [td_load>: Treasure Data bulk loading](https://docs.digdag.io/operators/td_load.md)
* [td_for_each>: Repeat using Treasure Data queries](https://docs.digdag.io/operators/td_for_each.md)
* [td_wait>: Waits for data arriving at Treasure Data table](https://docs.digdag.io/operators/td_wait.md)
* [td_wait_table>: Waits for data arriving at Treasure Data table](https://docs.digdag.io/operators/td_wait_table.md)
* [td_partial_delete>: Delete range of Treasure Data table](https://docs.digdag.io/operators/td_partial_delete.md)
* [td_table_export>: Treasure Data table export to S3](https://docs.digdag.io/operators/td_table_export.md)

Digdag can run tasks in parallel, so you can run independent tasks, such as parameter tuning, simultaneously.

Treasure Data Workflows enable you to make prediction tasks a periodic part of your product offerings. Running your machine learning processes in stable hourly or daily batches gives you a reliable way to evolve them and derive a better predictive model.

![](/assets/105578847.ce353f50a12e39ebe398e8039749d93187b5c9e549cbd0c65abfa050a1499f71.3cb60505.png)
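As a concrete illustration, the following is a minimal workflow sketch that chains the pipeline steps above with `td>` tasks and runs on a daily schedule. The database name, query file names, and result table names are hypothetical placeholders; the actual SQL for preparation, training, evaluation, and prediction depends on your data and model.

```yaml
# ml_pipeline.dig -- a minimal sketch; file, query, and table names are assumptions
timezone: UTC

schedule:
  daily>: 02:00:00            # run the whole pipeline once a day

_export:
  td:
    database: ml_demo         # hypothetical Treasure Data database

+prepare_data:
  td>: queries/prepare.sql    # feature engineering query (assumed to exist)
  create_table: training_data

+tune_parameters:
  _parallel: true             # independent tuning runs execute simultaneously
  +trial_a:
    td>: queries/train_trial_a.sql
    create_table: model_trial_a
  +trial_b:
    td>: queries/train_trial_b.sql
    create_table: model_trial_b

+build_model:
  td>: queries/train.sql      # train with the chosen parameters
  create_table: model

+evaluate_model:
  td>: queries/evaluate.sql   # compute accuracy, AUC, or other metrics
  create_table: evaluation_metrics

+predict:
  td>: queries/predict.sql    # score unseen data with the trained model
  create_table: predictions
```

The `_parallel: true` group shows how independent tasks such as parameter tuning can run concurrently, while the remaining tasks run sequentially so that each step consumes the table produced by the previous one.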