Advisor system

Possible performance issues

Issue:
Uptime is less than 1 day; performance tuning may not be accurate.
Recommendation:
To get more accurate averages, it is recommended to let the server run for longer than a day before running this analyzer.
Justification:
The uptime is only 0 days, 2 hours, 51 minutes and 20 seconds
Used variable / formula:
Uptime
Test:
value < 86400
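Example (a sketch of how to check this yourself; 86400 is the number of seconds in one day):
    SHOW GLOBAL STATUS LIKE 'Uptime';  -- server uptime in seconds; compare against 86400 (24 * 60 * 60)
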
Issue:
long_query_time is set to 10 seconds or more, so only slow queries that take longer than 10 seconds are logged.
Recommendation:
It is suggested to set long_query_time to a lower value, depending on your environment; a value of 1-5 seconds is usually appropriate.
Justification:
long_query_time is currently set to 10s.
Used variable / formula:
long_query_time
Test:
value >= 10
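Example (a sketch; 2 seconds is an assumed value, pick one that fits your workload):
    SET GLOBAL long_query_time = 2;   -- applies to new connections
    SET SESSION long_query_time = 2;  -- applies to the current connection
    -- to persist across restarts, also set long_query_time = 2 under [mysqld] in my.cnf
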
Issue:
The slow query log is disabled.
Recommendation:
Enable slow query logging by setting slow_query_log to 'ON'. This will help with troubleshooting badly performing queries.
Justification:
slow_query_log is set to 'OFF'
Used variable / formula:
slow_query_log
Test:
value == 'OFF'
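Example (a sketch; the log file path is an assumption, adjust it to your setup):
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- assumed path
    -- persist both settings under [mysqld] in my.cnf so they survive a restart
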
Issue:
There are lots of rows being sorted.
Recommendation:
While there is nothing wrong with a high amount of row sorting, you might want to make sure that the queries which require a lot of sorting use indexed columns in the ORDER BY clause, as this will result in much faster sorting.
Justification:
Sorted rows average: 4.76 per second
Used variable / formula:
Sort_rows / Uptime
Test:
value * 60 >= 1
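Example (a sketch with hypothetical table and column names):
    -- a query like: SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at;
    -- can avoid a filesort if the ORDER BY column is covered by an index:
    CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);
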
Issue:
There are too many joins without indexes.
Recommendation:
This means that joins are doing full table scans. Adding indexes for the columns being used in the join conditions will greatly speed up table joins.
Justification:
Table joins average: 4.8 per second; this value should be less than 1 per hour
Used variable / formula:
(Select_range_check + Select_scan + Select_full_join) / Uptime
Test:
value * 60 * 60 > 1
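Example (a sketch with hypothetical table and column names):
    -- for a join such as: SELECT ... FROM orders o JOIN customers c ON c.id = o.customer_id;
    -- index the join column so the join no longer scans the whole table:
    CREATE INDEX idx_orders_customer_id ON orders (customer_id);
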
Issue:
The rate of reading the first index entry is high.
Recommendation:
This usually indicates frequent full index scans. Full index scans are faster than table scans but still cost lots of CPU cycles on big tables. If those tables have or had high volumes of UPDATEs and DELETEs, running 'OPTIMIZE TABLE' might reduce the number of full index scans and/or speed them up. Other than that, full index scans can only be reduced by rewriting queries.
Justification:
Index scans average: 11.75 per minute; this value should be less than 1 per hour
Used variable / formula:
Handler_read_first / Uptime
Test:
value * 60 * 60 > 1
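Example (a sketch; the table name is hypothetical):
    OPTIMIZE TABLE orders;  -- defragments the table and refreshes index statistics
    -- for InnoDB tables this is mapped to a table rebuild (ALTER TABLE ... FORCE) plus ANALYZE TABLE
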
Issue:
The rate of reading data from a fixed position is high.
Recommendation:
This indicates that many queries need to sort results and/or do a full table scan, including join queries that do not use indexes. Add indexes where applicable.
Justification:
Average rate of reading from a fixed position: 4.72 per second; this value should be less than 1 per hour
Used variable / formula:
Handler_read_rnd / Uptime
Test:
value * 60 * 60 > 1
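Example (a sketch; the query and names are hypothetical):
    EXPLAIN SELECT * FROM orders WHERE status = 'open' ORDER BY created_at;
    -- type = ALL and 'Using filesort' in the Extra column point to a missing index
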
Issue:
The rate of reading the next table row is high.
Recommendation:
This indicates that many queries are doing full table scans. Add indexes where applicable.
Justification:
Rate of reading the next table row: 153.89 per second; this value should be less than 1 per hour
Used variable / formula:
Handler_read_rnd_next / Uptime
Test:
value * 60 * 60 > 1
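Example (a sketch of how to reproduce the figures above):
    SHOW GLOBAL STATUS LIKE 'Handler_read%';  -- Handler_read_rnd_next counts rows read during full table scans
    -- dividing the counter by Uptime gives the per-second rate quoted in the justification
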
Issue:
Many temporary tables are being written to disk instead of being kept in memory.
Recommendation:
Increasing max_heap_table_size and tmp_table_size might help. However, some temporary tables are always written to disk, independent of the value of these variables. To eliminate these you will have to rewrite your queries to avoid the conditions that force an on-disk temporary table (presence of a BLOB or TEXT column, or of any column bigger than 512 bytes), as mentioned in the MySQL documentation.
Justification:
Rate of temporary tables being written to disk: 23.53 per minute; this value should be less than 1 per hour
Used variable / formula:
Created_tmp_disk_tables / Uptime
Test:
value * 60 * 60 > 1
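Example (a sketch; 64 MiB is an assumed size, not a recommendation for your workload):
    SET GLOBAL tmp_table_size = 64 * 1024 * 1024;
    SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;  -- the smaller of the two limits in-memory temporary tables
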
Issue:
The percentage of the MyISAM key buffer (index cache) in use is low.
Recommendation:
You may need to decrease key_buffer_size, re-examine your tables to see if indexes have been removed, or examine queries and expectations about which indexes are being used.
Justification:
Maximum % of the MyISAM key buffer ever used: 0%; this value should be above 95%
Used variable / formula:
Key_blocks_used * key_cache_block_size / key_buffer_size * 100
Test:
value < 95
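Example (a sketch; 16 MiB is an assumed size):
    SHOW GLOBAL STATUS LIKE 'Key_blocks_used';      -- blocks in use in the MyISAM key cache
    SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';   -- currently configured size
    SET GLOBAL key_buffer_size = 16 * 1024 * 1024;  -- only worthwhile if MyISAM tables are actually in use
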
Issue:
The rate of opening tables is high.
Recommendation:
Opening tables requires disk I/O, which is costly. Increasing table_open_cache might avoid this.
Justification:
Opened table rate: 1.36 per minute; this value should be less than 10 per hour
Used variable / formula:
Opened_tables / Uptime
Test:
value * 60 * 60 > 10
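Example (a sketch; 4000 is an assumed value, and the variable is only dynamic on MySQL 5.6 and later):
    SET GLOBAL table_open_cache = 4000;
    SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- should grow much more slowly after the change
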
Issue:
The rate of opening files is high.
Recommendation:
Consider increasing open_files_limit, and check the error log after restarting the server with the new value.
Justification:
Opened files rate: 21.01 per hour; this value should be less than 5 per hour
Used variable / formula:
Open_files / Uptime
Test:
value * 60 * 60 > 5
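Example (a sketch; open_files_limit is read-only at runtime, so the change goes into the configuration file):
    SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
    -- raise it under [mysqld] in my.cnf (e.g. open_files_limit = 5000), restart, then check the error log
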
Issue:
Less than 80% of the query cache is being utilised.
Recommendation:
This might be caused by query_cache_limit being too low. Flushing the query cache might help as well.
Justification:
The query cache is currently 9% utilised; this value should be above 80%
Used variable / formula:
100 - Qcache_free_memory / query_cache_size * 100
Test:
value < 80
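Example (a sketch; the query cache only exists in MySQL 5.x and MariaDB, it was removed in MySQL 8.0):
    FLUSH QUERY CACHE;  -- defragments the cache without removing cached results
    RESET QUERY CACHE;  -- empties the cache entirely
    SHOW GLOBAL STATUS LIKE 'Qcache%';  -- Qcache_free_memory and Qcache_lowmem_prunes show how the cache is behaving
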
Issue:
The maximum size of a result set stored in the query cache is at the default of 1 MiB.
Recommendation:
Changing query_cache_limit (usually by increasing it) may increase efficiency. This variable determines the maximum size a query result may have in order to be inserted into the query cache. If there are many query results above 1 MiB that are well cacheable (many reads, few writes), increasing query_cache_limit will increase efficiency. If, on the other hand, many query results above 1 MiB are not well cacheable (they are often invalidated by table updates), increasing query_cache_limit might reduce efficiency.
Justification:
query_cache_limit is set to 1 MiB
Used variable / formula:
query_cache_limit
Test:
value == 1024*1024
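Example (a sketch; 4 MiB is an assumed value, measure cache behaviour before and after):
    SET GLOBAL query_cache_limit = 4 * 1024 * 1024;
    SHOW GLOBAL STATUS LIKE 'Qcache_hits';        -- compare hit counts before and after the change
    SHOW GLOBAL STATUS LIKE 'Qcache_not_cached';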