Treasure Data provides Treasure Agent (td-agent) to collect server-side logs and events. This article explains how to seamlessly import data from PHP applications into Treasure Data through td-agent.
This article assumes the following prerequisites:

- Basic knowledge of PHP
- Basic knowledge of Treasure Data, including the TD Toolbelt
- PHP 5.3 or higher (for local testing)

Note that the fluent-logger-php library does not work on Heroku.
First, install td-agent on your application servers. td-agent runs within each application server and focuses on uploading application logs to the cloud.
The fluent-logger-php library enables PHP applications to post records to their local td-agent. td-agent, in turn, uploads the data to the cloud every 5 minutes. Because the daemon runs on a local node, the logging latency is negligible.
td-agent Install Options
To install td-agent, run one of the following commands based on your environment. The agent program is installed automatically by each platform's package manager (rpm/deb/dmg).
Ubuntu and Debian
Legacy support for EOL versions is still available.
You can choose Amazon Linux 1 or Amazon Linux 2. Refer to Installing td-agent on Amazon Linux.
Mac OS X 10.11+
Mac OS X 10.11.1 (El Capitan) introduced some security changes. After td-agent is installed, edit the /Library/LaunchDaemons/td-agent.plist file to change /usr/sbin/td-agent to /opt/td-agent/usr/sbin/td-agent.
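For example, the path can be rewritten in place with sed (a sketch; back up the plist first, and note that its exact contents may vary by td-agent version):

```shell
# Rewrite the daemon path in the launchd plist (macOS/BSD sed in-place syntax)
sudo sed -i '' \
  's|/usr/sbin/td-agent|/opt/td-agent/usr/sbin/td-agent|g' \
  /Library/LaunchDaemons/td-agent.plist
```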
Windows Server 2012+
The Windows installation requires additional steps; refer to the Windows installation documentation.
Opscode Chef Repository
AWS Elastic Beanstalk is also supported. Windows on Elastic Beanstalk is currently NOT supported.
Next, specify your API key by setting the apikey option. You can view your API key from your profile in the TD Console. Replace YOUR_API_KEY with your actual API key string; using a [write-only API key](access-control#rest-apis-access) is recommended.
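In /etc/td-agent/td-agent.conf, the API key goes in the tdlog output section. A typical configuration looks like this (the match pattern, buffer path, and forward port follow the package defaults; adjust as needed):

```
<match td.*.*>
  @type tdlog
  apikey YOUR_API_KEY
  auto_create_table
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td
</match>

<source>
  @type forward
  port 24224
</source>
```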
Once these lines are in place, restart your agent.
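On Linux installations, td-agent can be restarted via its init script or systemd unit (the exact command depends on your platform):

```shell
$ sudo /etc/init.d/td-agent restart
# or, on systemd-based distributions:
$ sudo systemctl restart td-agent
```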
td-agent now accepts data via port 24224, buffers it (/var/log/td-agent/buffer/td), and automatically uploads it into the cloud.
To use fluent-logger-php, manage it with Composer. First, create a composer.json file in your application directory with the following content.
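A minimal composer.json might look like the following (fluent/logger is the library's package name on Packagist; pin whichever version suits your project):

```json
{
    "require": {
        "fluent/logger": "^1.0"
    }
}
```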
Next, download Composer and install the necessary libraries.
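For example, using the composer.phar installer from getcomposer.org:

```shell
$ curl -sS https://getcomposer.org/installer | php
$ php composer.phar install
```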
Next, initialize and post the records as follows.
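A minimal test script (test.php is a hypothetical file name) using the library's FluentLogger class might look like this:

```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use Fluent\Logger\FluentLogger;

// Connect to the local td-agent (in_forward listens on port 24224 by default)
$logger = new FluentLogger("localhost", "24224");

// The tag "td.test_db.test_table" maps to database test_db, table test_table
$logger->post("td.test_db.test_table", array("hello" => "world"));
```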
Confirming Data Import
Execute the preceding program, then send td-agent a SIGUSR1 signal to flush its buffer so that the upload starts immediately.
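For example (test.php stands in for your script, and the pid file path is the package default; adjust if your installation differs):

```shell
$ php test.php
$ kill -USR1 $(cat /var/run/td-agent/td-agent.pid)
```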
To confirm that your data has been uploaded successfully, issue the td tables command as follows:
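Using the TD Toolbelt:

```shell
$ td tables
```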
The first argument of post() determines the database and table names. If you specify `td.test_db.test_table`, the data is imported into the table *test_table* within the database *test_db*. Both are created automatically at upload time.
Tips on Production Deployment
Use Apache and mod_php
We recommend that you use Apache and mod_php. Other setups have not been fully validated.
Use Apache prefork MPM
Use Apache prefork MPM. Other MPMs such as worker MPM should not be used. You can confirm your current settings with the apachectl -V command.
We recommend that you periodically restart your PHP processes by setting MaxRequestsPerChild in your Apache conf.
Do not set MaxRequestsPerChild to zero.
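For example, in your Apache configuration (the threshold of 1000 is illustrative; tune it for your traffic):

```apache
<IfModule mpm_prefork_module>
    # Recycle each child process after 1000 requests.
    # Never set this to 0, which disables recycling entirely.
    MaxRequestsPerChild 1000
</IfModule>
```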
High-Availability Configurations of td-agent
For high-traffic websites (more than 5 application nodes), use a high availability configuration of td-agent to improve data transfer reliability and query performance.
Monitoring td-agent itself is also important. Refer to the td-agent monitoring documentation for general monitoring methods.
td-agent is fully open-sourced under the Fluentd project.
“Resource temporarily unavailable” warning message appears in my PHP application
This problem occurs when you have either a relatively high volume of traffic or an old Linux kernel version. In that case, tune the Linux kernel as follows.
Increase Max # of File Descriptors
First, increase the maximum number of file descriptors per process. If the ulimit -n command returns 1024, follow these instructions to increase the limit to 65535:
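One common approach is to raise the nofile limits in /etc/security/limits.conf and then log in again (the user entries shown are illustrative; apply them to the account that runs td-agent):

```
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
```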
Optimize Kernel Parameters
Add the following parameters to your /etc/sysctl.conf file, then either apply them with sysctl -w or reboot your node for the changes to take effect. Root permission is required.
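The parameters commonly recommended for high-load Fluentd/td-agent nodes are along these lines (values follow the upstream Fluentd pre-installation suggestions; verify them against your kernel version and workload):

```
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 5000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10240 65535
```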
We offer a schema mechanism that is more flexible than that of traditional RDBMSs. For queries, we leverage the Hive and Presto Query Languages.