What’s the problem?
Your MySQL database was working well, humming along nicely, but quite suddenly performance went south, especially during periods of heavy usage. Maybe you’re seeing deadlocks, or queries have become incredibly slow. If your MySQL database is using InnoDB, there is a good chance you need to change your InnoDB storage layout. If your database is using MyISAM and you’re getting table-locking issues, chances are it’s time to go to InnoDB, and you need to enable file-per-table before you convert the databases to InnoDB.
Why is this happening?
The concept of “tablespaces” is not usually exposed in MySQL, unlike many other RDBMSs. With MyISAM, each table is written to its own files. With InnoDB, by default, all of the tables are written to one shared file (ibdata1), created at 10MB with an auto-extend increment of 8MB. Each time the file fills, MySQL has to extend it by another 8MB, and under a heavy write load this constant growth really hits disk I/O, slows performance, and makes you pull your hair out. But there is a solution: have each InnoDB table written to its own file.
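In my.cnf terms, the two layouts look roughly like this (a sketch; ibdata1:10M:autoextend is the server default, and innodb_file_per_table is the setting this article enables):

```ini
[mysqld]
# Default: all InnoDB tables share one auto-extending ibdata file
innodb_data_file_path = ibdata1:10M:autoextend

# Alternative: each InnoDB table gets its own .ibd file
innodb_file_per_table = 1
```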
There are a few things to be aware of when converting to InnoDB. The two major items that you need to address FIRST are REPLICATION and INDEXES. Replication adds complexity, with multiple machines needing to be changed, and is NOT addressed in this article (i.e. don’t use this article if you are replicating). Indexes need to be examined in the MyISAM tables because full-text and spatial indexes are not available in InnoDB (at least as of MySQL 5.5). You’ll have to look at your schema to see if you are using these kinds of indexes. Also, the maximum length of an index differs between the engines. Check this – it can get you.
A good article on gotchas and work-arounds for these and other potential issues is: https://mysql.rjweb.org/doc.php/myisam2innodb
Additionally, consider adjusting key_buffer_size and innodb_buffer_pool_size when going to InnoDB.
Is file-per-table the solution for me?
To check if this is a solution for you, first check to see if you have file-per-table enabled, and see how many tables are using InnoDB:
SHOW VARIABLES LIKE '%innodb%file%';
SELECT ENGINE, COUNT(*) FROM information_schema.tables WHERE table_schema NOT IN ('mysql','information_schema','test') GROUP BY 1;
If innodb_file_per_table is OFF, innodb_data_file_path points at a shared file (e.g. ibdata1:10M:autoextend), and the InnoDB engine counts are larger than a couple, then you are definitely a candidate. Note that NULL engine types indicate tables you may want to check with myisamchk or CHECK TABLE.
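For reference, here is roughly what a shared-tablespace server reports (output trimmed to the relevant rows; the counts are illustrative):

```
mysql> SHOW VARIABLES LIKE '%innodb%file%';
+-----------------------+------------------------+
| Variable_name         | Value                  |
+-----------------------+------------------------+
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_file_per_table | OFF                    |
+-----------------------+------------------------+

mysql> SELECT ENGINE, COUNT(*) FROM information_schema.tables
    -> WHERE table_schema NOT IN ('mysql','information_schema','test') GROUP BY 1;
+--------+----------+
| ENGINE | COUNT(*) |
+--------+----------+
| InnoDB |       48 |
| MyISAM |       12 |
+--------+----------+
```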
If you have file-per-table enabled, but enabled it AFTER your InnoDB tables were created, then you still have the problem: any table created before file-per-table was enabled is still using the shared ibdata files.
What do I have to do?
So what you have to do is:
- Stop your application(s) from hitting the database(s)
- Back up all to-be-converted tables/databases
- Back up the GRANTS
- Back up your active system variables (in case you’ve made changes that you haven’t applied to your mysql configuration file)
- Stop mysql
- Back up your innodb data files (those files in innodb_data_file_path which are located in datadir)
- Back up the mysql configuration file to another location
- Start mysql
- Drop the tables/databases you are converting
- Stop mysql
- Change the configuration to file-per-table (adding in any non-default configuration items you had)
- Delete/move your old innodb data files (which you’ve already backed up)
- Start MySQL
- Load your data back in
- and Run your grants
Whew! Sounds like a lot, but it’s actually quite simple – albeit time consuming. You simply have to ensure you follow the steps and don’t skip anything.
If you only have a few key tables, you can do these steps for just those tables, and leave all those other storage engines alone (e.g. MyISAM, Archive, Memory, etc…).
Back up your InnoDB data, along with any triggers and stored procedures involved with those objects.
1. From the command line (DOS or Bash), replacing the italicized words with your information:
- mysqldump -uusername -ppassword --no-create-info databasename > databasename_data.sql
- mysqldump -uusername -ppassword -d databasename > databasename_schema.sql
- mysqldump -uusername -ppassword -d databasename --routines --no-create-info --no-data --no-create-db --skip-opt > databasename_routines.sql (if necessary)
These are separate backups for schema and data.
2. Back up your grants (DOS or Bash):
- mysql -uusername -ppassword -ss --skip-column-names -e"SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user;" | mysql -uusername -ppassword -ss --skip-column-names > grants.sql
So now you have one grants.sql as a backup of your grants, and for each database, a databasename_schema.sql and a databasename_data.sql.
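What lands in grants.sql is one SHOW GRANTS result per user/host pair; expect lines along these lines (the user, database, and password hash here are made up):

```sql
GRANT USAGE ON *.* TO 'appuser'@'localhost' IDENTIFIED BY PASSWORD '*1234ABCD';
GRANT ALL PRIVILEGES ON `appdb`.* TO 'appuser'@'localhost';
```

One gotcha worth knowing: SHOW GRANTS output has no trailing semicolons, so before replaying the file, append them, e.g. with sed 's/$/;/' grants.sql > grants_fixed.sql.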
Whenever copying the ibdata files, it is recommended to also copy the redo logs (ib_logfile0 and ib_logfile1, not to be confused with the binary logs) to maintain consistency and avoid issues during startup. So copy your ibdata files, redo logs, and my.cnf (or my.ini for Windows):
To back up your mysql configuration, first dump your current running configuration (a hot copy) – you never know if you’ve made some changes and forgotten to add them to your my.cnf/my.ini:
- mysql -uusername -ppassword -e"SHOW VARIABLES;" > variables.txt
Then copy your my.cnf/my.ini to another location. On Linux-type systems this is usually /etc/my.cnf or /etc/mysql/my.cnf, and on Windows typically C:\ProgramData\MySQL\MySQL Server 5.X\my.ini. To find out for certain, look at the MySQL process:
ps -ef | grep mysqld
On Windows, find the service name:
C:\> sc query type= service | find "MySQL"
which returns the service name (for example, SERVICE_NAME: MySQL55).
Then find the parameters for that service:
C:\> sc qc MySQL55
Which gives you results like:
[SC] QueryServiceConfig SUCCESS
TYPE : 10 WIN32_OWN_PROCESS
START_TYPE : 2 AUTO_START
ERROR_CONTROL : 1 NORMAL
BINARY_PATH_NAME : "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="C:\ProgramData\MySQL\MySQL Server 5.5\my.ini" MySQL55
TAG : 0
DISPLAY_NAME : MySQL55
SERVICE_START_NAME : LocalSystem
Copy the my.ini or my.cnf to a safe location.
To copy your ibdata files (for safekeeping, in case you mess things up), first you need to stop mysql:
mysqladmin --port=3316 -uusername -p shutdown
or in Windows, stop the service.
When you dumped your variables (variables.txt), the datadir variable shows where the ibdata files live. Copy those files (e.g. ibdata1, ib_logfile0 and ib_logfile1) to a safe location.
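As a concrete sketch of that copy step (shown here against throwaway temp directories so it can be run safely; in reality DATADIR is your datadir from variables.txt, BACKUPDIR is wherever you keep backups, and mysql must already be stopped):

```shell
# Stand-ins: a fake datadir with empty ibdata/redo-log files
DATADIR=$(mktemp -d)                  # really e.g. /var/lib/mysql
BACKUPDIR=$(mktemp -d)/innodb_backup  # anywhere with enough free space
touch "$DATADIR/ibdata1" "$DATADIR/ib_logfile0" "$DATADIR/ib_logfile1"

# The actual backup step: copy the shared tablespace and both redo logs
# together, preserving timestamps
mkdir -p "$BACKUPDIR"
cp -p "$DATADIR"/ibdata* "$DATADIR"/ib_logfile* "$BACKUPDIR"/
ls "$BACKUPDIR"
```

Copying the tablespace and redo logs as a set is what keeps the backup internally consistent.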
Start mysql back up.
In Windows, start the service.
cd /usr/local/mysql/bin (or wherever your mysql is installed)
Drop the databases (or tables) in question.
Make sure you don’t leave any InnoDB tables in any database before moving the ib* files; any InnoDB table left behind will be corrupted once the shared ibdata files are gone.
Verify that no InnoDB tables remain (query information_schema.tables for engine = 'InnoDB').
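A quick check against information_schema; this should return no rows before you touch the ib* files:

```sql
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'InnoDB'
  AND table_schema NOT IN ('mysql', 'information_schema');
```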
Edit the my.cnf/my.ini to set innodb_file_per_table=1 (plus any non-default settings you captured in variables.txt).
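The minimal change, assuming no other non-default settings need restoring, is one line in the [mysqld] section:

```ini
[mysqld]
innodb_file_per_table = 1
```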
Delete (or move) the old ibdata and ib_logfile files; you’ve already backed them up.
Load your databases and then your grants:
- mysql -uusername -ppassword < databasename_schema.sql
- mysql -uusername -ppassword < databasename_data.sql
- mysql -uusername -ppassword < databasename_routines.sql (if you created one)
- mysql -uusername -ppassword < grants.sql
Test your database and start your application.
MyISAM to InnoDB Conversion
If you’re using MyISAM and want to go to InnoDB, then your steps are much simpler:
- Stop anyone (applications/scripts/people) from accessing the databases.
- Edit your my.cnf/my.ini (see the steps above for how to find it) to set innodb_file_per_table=1; if desired, also set the default engine to InnoDB (default-storage-engine=InnoDB) and adjust key_buffer_size and innodb_buffer_pool_size. Then restart mysql.
- You have two choices: dump and load (like above) editing the schema dump to use storage engine InnoDB (search/replace MyISAM to InnoDB on those tables you’re converting) – or ALTER the tables to InnoDB. The alters can take some time, but you can do a few at a time, starting with the most troublesome tables.
To alter ALL the MyISAM tables, you can query information_schema to generate a script of ALTER statements (innodb_alters.sql).
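One way to generate that script (a sketch; adjust the schema filter as needed). The CONCAT builds one ALTER statement per MyISAM table:

```sql
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema');
```

Save the query to a file (say gen_alters.sql, a name chosen here) and capture its output in batch mode: mysql -uusername -ppassword -ss --skip-column-names < gen_alters.sql > innodb_alters.sql.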
And run the alters:
- mysql -uusername -ppassword < innodb_alters.sql