MySQL can log slow queries to either a log file or a table, based on a configured query duration threshold. Parsing the slow log with a tool such as EverSQL Query Optimizer lets you quickly locate the most common and slowest SQL queries in your database and optimize them automatically.
We'll cover a few topics in this article:
- Enable slow query logs on MySQL (self-hosted instance)
- Enable slow query logs on PostgreSQL (self-hosted instance)
- Enable slow query logs on RDS MySQL/PostgreSQL (managed instance)
- How to generate a MySQL slow query file from a slow query log table
Logging MySQL slow queries to a file (on self-hosted MySQL instances)
To enable the slow query log for MySQL/MariaDB/Percona, navigate to the configuration file my.cnf (default path: /etc/mysql/my.cnf). Locate the configuration section [mysqld].
At the bottom of that section, add the following configuration values to log all queries with an execution duration of over 1 second.
slow_query_log=1
long_query_time=1
log_output=FILE
slow_query_log_file=/var/lib/mysql/slow.log
Do not forget to restart the database to apply these configuration changes. For example, on Ubuntu, you can run the following command:
sudo service mysql restart
If you prefer not to restart your database at this point, look at the section for "instant configuration update" below.
A few notes:
- To modify the logged execution duration threshold, modify long_query_time.
- The default path for the new slow log file is: /var/lib/mysql/hostname-slow.log.
To configure the path, modify the configuration key slow_query_log_file to the new path (e.g., slow_query_log_file=/path/filename).
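After the restart, you can confirm that logging works by running a query that exceeds the threshold and checking the log file (the credentials and paths below are examples; adjust them to your setup):

```shell
# Run a query that takes longer than long_query_time (1 second above)
mysql -u root -p -e "SELECT SLEEP(2);"

# The statement should now appear in the slow log
sudo tail /var/lib/mysql/slow.log
```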
Logging MySQL slow queries to a file (instant configuration update)
To enable slow query logs without restarting the database, first apply the changes described above in the my.cnf file (to make sure the changes persist after the next restart), and then run the following statements using your favorite SQL IDE.
In high-load systems, a low long_query_time configuration can cause excessive logging, so we recommend starting with higher thresholds and lowering them over time.
- SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';
- SET GLOBAL long_query_time = 1;
- SET GLOBAL slow_query_log = 'ON';
- FLUSH LOGS;
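You can verify that the new values are in effect with:

```sql
-- Check the current slow-log settings
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
```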
Logging PostgreSQL slow queries to a file (on self-hosted PostgreSQL instances)
- Find the location of postgresql.conf
Open a terminal and run the following command to find the location of the postgresql.conf file for your database:
$ find / -name postgresql.conf
/etc/postgresql/9.2/main/postgresql.conf
Alternatively, run:
$ psql -U postgres -c 'SHOW config_file'
- Open the file postgresql.conf in your favorite text editor.
$ vi postgresql.conf
- Search for the line: #log_min_duration_statement = -1
Un-comment it and replace it with: log_min_duration_statement = 500
* This setting logs all queries that run for more than 500 ms. Adjust the threshold to match your definition of a slow query.
- Inside postgresql.conf, make sure that logging_collector is turned on:
logging_collector = on
- Save the file and execute this command to reload the PostgreSQL configuration:
$ service postgresql reload
- The log file will now be written to one of the following paths:
- /var/lib/pgsql/PG_VERSION/data/log/
- /var/log/postgresql/
- /var/lib/postgresql/PG_VERSION/main/pg_log
You can find the exact slow query log location by searching postgresql.conf for the log_directory setting; for example, log_directory = 'pg_log' indicates that the PostgreSQL log files are located at /var/lib/pgsql/data/pg_log/.
- Make sure the log_line_prefix configuration is set to the default, so that log lines start with a prefix similar to the following.
2023-10-16 14:30:00.123 UTC [12345] LOG: ...
- Make sure log_timezone is set to UTC, so that logs are written in the standard UTC timezone.
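As with MySQL, you can confirm the configuration by running a query that exceeds the threshold and checking the log (the log path below is an example; see the list of possible paths above):

```shell
# pg_sleep(1) runs for 1000 ms, above the 500 ms threshold configured earlier
psql -U postgres -c 'SELECT pg_sleep(1);'

# Check the most recent log file (example path; yours may differ)
tail /var/log/postgresql/postgresql-*-main.log
```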
Export MySQL or PostgreSQL slow queries on RDS Managed instances
- To enable slow query logs on RDS managed instances, follow the official AWS instructions:
- AWS RDS instructions for MySQL, or
- AWS RDS instructions for PostgreSQL, or
- If you're using Aurora Serverless, navigate to your Aurora instance settings => Modify => enable the Log exports checkbox to send your database logs to CloudWatch.
- Once you have configured the new parameter group on your instance, assign the following configuration values.
For MySQL:
You can modify long_query_time to the duration in seconds you consider slow (e.g., set it to 1 if you'd like to log any query that is slower than 1 second).
slow_query_log=1
long_query_time=1
log_output=FILE
For PostgreSQL:
You can modify log_min_duration_statement to the duration in milliseconds you consider as slow (e.g., configure it as 500 if you'd like to log any query that is slower than 500 milliseconds, which is 0.5 seconds).
log_min_duration_statement=500
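Once logging is enabled, you can list and download the log files with the AWS CLI (the instance identifier below is an example, and the slow log file name differs between engines):

```shell
# List the available log files on the instance
aws rds describe-db-log-files --db-instance-identifier mydb-instance

# Download the slow query log (on RDS MySQL it is typically named slowquery/mysql-slowquery.log)
aws rds download-db-log-file-portion --db-instance-identifier mydb-instance \
  --log-file-name slowquery/mysql-slowquery.log --output text > slow.log
```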
Generate a file from the slow log table
Slow queries can be logged to either a file or a table. If you chose to log them to a table, you are probably wondering how you can use EverSQL to analyze those log entries and optimize those queries.
Using the following SQL query, you can retrieve the entries from the mysql.slow_log table and automatically generate a slow query log file in the same format that MySQL generates, which can then be analyzed by EverSQL.
Please note: before running this command, make sure you have the mysql command line client installed on your machine.
Also, please set the host, the username, and the log file path in the command.
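As an illustration, a command along these lines produces output close to MySQL's slow log file format (this is a sketch, not EverSQL's exact query; the host, user, and output path are placeholders, and you may need to adjust the formatting for your MySQL version):

```sql
-- Run via: mysql -h <host> -u <user> -p --silent --raw < export_slow_log.sql > slow.log
SELECT CONCAT(
    '# Time: ', DATE_FORMAT(start_time, '%y%m%d %H%i%s'), '\n',
    '# User@Host: ', user_host, '\n',
    '# Query_time: ', TIME_TO_SEC(query_time),
    '  Lock_time: ', TIME_TO_SEC(lock_time),
    '  Rows_sent: ', rows_sent,
    '  Rows_examined: ', rows_examined, '\n',
    CONVERT(sql_text USING utf8), ';'
)
FROM mysql.slow_log;
```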
Once you have the log file, you can upload it to EverSQL Query Optimizer to visualize the log entries and choose the queries you'd like to optimize.
Summary
For any questions regarding database performance optimization or enabling slow query logs, please contact us at [email protected]