Cloud Migration: How to Go from an On-premises Database to Amazon Relational Database Service (RDS) with No Downtime

Vikas Kukreti

Technical Lead

Introduction:

When migrating a database to the cloud, companies want minimal downtime and usually want the cutover to happen during off-peak hours. This requires careful planning: analyzing when database usage is lowest, choosing a migration strategy, and reviewing various architectural aspects. In this blog, I will explain how to migrate data from an on-premises database to Amazon Relational Database Service (RDS) with no downtime.

Let us try to answer two basic questions before learning how to migrate an on-premises database to RDS.

Why do we need to migrate?

The main reasons to migrate an on-premises database to RDS are the ability to scale vertically or horizontally, simplicity, automated management, robust MySQL support, minimal downtime, automated backups, and security.

How do you migrate?

To migrate data from an existing MySQL instance running on-premises or in a data center, you configure it as a replication source. The on-premises database remains in sync with RDS until cutover. Once all testing is done, you can decommission the old on-premises database.
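Configuring the source as a replication master typically means enabling binary logging and giving it a unique server ID. A minimal my.cnf sketch, with illustrative values:

```ini
# /etc/my.cnf on the on-premises source (values are illustrative)
[mysqld]
server-id        = 1            # must be unique across the replication topology
log_bin          = mysql-bin    # enable binary logging
binlog_format    = ROW          # safest format for external replication to RDS
expire_logs_days = 7            # keep binlogs long enough to cover the migration window
```

After changing these settings, restart MySQL and create a dedicated replication user on the source.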

Although RDS is quite robust and automated, it still has pros and cons:

Pros:

  • Unlike a physical or data-center machine, you can scale up CPU, storage, and other resources on demand with Amazon RDS.
  • It is very easy to create replication servers (read replicas) with RDS.
  • You can create a secondary instance that takes over if the primary instance fails; this Multi-AZ standby is kept in sync synchronously.
  • Backups and software version patching are easy, as most of this is automated by RDS.

Cons:

  • You cannot get shell access to the underlying database instance; you can only access the RDS service endpoint.
  • You do not have full root/SUPER privileges on RDS databases.

Overview:

prod-servername-oltp01 is the on-premises master database and prod-servername-oltp02 (also on-premises) is its replication slave. It is good practice to have a replication slave; if you don't have one, create it so that you can take the dump from the slave without impacting the master.

Steps:

Take a dump from the replication slave prod-servername-oltp02 using mysqldump (the full command is shown below). It is best to use the --master-data and --single-transaction options: they provide a convenient way to make an online backup suitable for point-in-time recovery, and they record the binlog position needed to create a replication slave.

Creating an RDS database:

  • Create a DB identifier, user, and password.
  • Select the RDS version and instance type.
  • Select the VPC, subnet, and security group.
  • Set the backup policy.
  • Set up log policies and the maintenance window.
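The console steps above can also be scripted with the AWS CLI. A sketch, where the identifier, engine version, instance class, credentials, and network IDs are all placeholders you would replace with your own:

```shell
# Create the target MySQL RDS instance (all values below are placeholders)
aws rds create-db-instance \
  --db-instance-identifier prod-aws-server \
  --engine mysql \
  --engine-version 5.7.26 \
  --db-instance-class db.m5.large \
  --allocated-storage 100 \
  --master-username root \
  --master-user-password '<password>' \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --db-subnet-group-name my-db-subnet-group \
  --backup-retention-period 7
```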

Changes in the RDS parameter group:

For AWS RDS instances, you manage your database engine configuration through the parameters in a DB parameter group.

When a database instance is created, it is attached to a default DB parameter group, which cannot be edited. To use your own settings, create a custom parameter group, then click on the RDS instance, select "Modify," and choose the custom parameter group on the Modify screen.

The following parameters need to be changed for the RDS migration and for creating a replication slave.
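As one example of the kind of change typically required: restoring a dump that contains triggers and stored routines usually needs log_bin_trust_function_creators enabled, since RDS users lack SUPER. A hedged AWS CLI sketch, where the parameter group name is a placeholder:

```shell
# Allow trigger/routine creation without SUPER (needed when restoring the dump);
# "prod-mysql-params" is a placeholder for your custom parameter group name
aws rds modify-db-parameter-group \
  --db-parameter-group-name prod-mysql-params \
  --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"
```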

Taking dump from the source/master database:

Consider splitting the mysqldump into two phases: --no-data to dump the schema only, then --no-create-info to dump the data only. This way you can handle schema errors and data errors separately. Below is the command to dump the schema and data together.

mysqldump -u"$USER" -p"$PASS" --databases <database-name> --routines --triggers --single-transaction --master-data=2 2>> "$BACKUP_LOG" | gzip > "$BACKUP_FILE"

Here the variables $USER, $PASS, $BACKUP_FILE, and $BACKUP_LOG hold the MySQL username, the password, the location and name of the backup dump, and the log file, respectively. Note that the error redirect is placed on mysqldump itself, so dump errors (not gzip errors) are captured in the log.
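The two-phase approach mentioned above can be sketched as follows; mysqldump's flag for a data-only dump is --no-create-info, and the file names here are placeholders:

```shell
# Phase 1: schema only (tables, routines, triggers), no rows
mysqldump -u"$USER" -p"$PASS" --databases <database-name> \
  --no-data --routines --triggers > schema.sql 2>> "$BACKUP_LOG"

# Phase 2: data only, with the source binlog coordinates recorded as a comment
mysqldump -u"$USER" -p"$PASS" --databases <database-name> \
  --no-create-info --single-transaction --master-data=2 \
  2>> "$BACKUP_LOG" | gzip > data.sql.gz
```

Restoring schema.sql first lets you fix definer or compatibility errors in the schema before committing to the long data load.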

Issues with definers:

RDS has a problem with triggers and views: they cannot be loaded using syntax that includes SQL DEFINER entries for users that do not exist on RDS. The load will fail unless the definer user is changed to the current operating user. Linux sed can be used to replace the local definer users with the current operating user.

sed -i -e 's/`root`@`localhost`/`root`@`%`/g' -e 's/`abc`@`localhost`/`root`@`%`/g' db.sql

Copying / restoring the database to destination RDS instance:

scp ./db.sql  10.90.102.51:/backup/p-vkukreti

nohup mysql -u root -p"$PASS" -h prod-aws-server.cj3vbvmmfkzq.us-west-2.rds.amazonaws.com < db.sql > nohup_rds_restore.out 2>&1 &

RDS support for SUPER user:

Amazon RDS does not provide the SUPER privilege, even for the master user. The following privileges are the ones granted to the root user instead.

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'root'@'%' IDENTIFIED BY '<password>' WITH GRANT OPTION;

Login into RDS server:

Once data is migrated to RDS you can log in as follows:

[p-vkukreti@ba-us1-jump02 ~]$ mysql -u root -h prod-aws-server.cj3vbvmmfkzq.us-west-2.rds.amazonaws.com -p

Enter password:

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db_to_migrate      |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.03 sec)

Making the RDS writer a replication slave of the on-premises master:

CALL mysql.rds_stop_replication;

CALL mysql.rds_reset_external_master;

CALL mysql.rds_set_external_master('10.201.205.11', 3306, 'aws_slave', '<password>', 'binlog.014241', 173996496, 0);

CALL mysql.rds_start_replication;
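The binlog file name and position passed to mysql.rds_set_external_master come from the commented-out CHANGE MASTER statement that --master-data=2 writes near the top of the dump. A small sketch to pull them out of the db.sql dump copied earlier:

```shell
# --master-data=2 records the source's binlog coordinates as a commented-out
# CHANGE MASTER statement near the top of the dump; print "file position"
head -n 100 db.sql | grep 'CHANGE MASTER' \
  | sed -E "s/.*MASTER_LOG_FILE='([^']+)', MASTER_LOG_POS=([0-9]+).*/\1 \2/"
```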

root:(none)> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.19.0.254
                  Master_User: rdsrepladmin
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin-changelog.001800
          Read_Master_Log_Pos: 2595669
               Relay_Log_File: relaylog.000212
                Relay_Log_Pos: 71787104
        Relay_Master_Log_File: mysql-bin-changelog.001768
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table: mysql.plugin,mysql.rds_monitor,mysql.rds_sysinfo,innodb_memcache.cache_policies,mysql.rds_history,innodb_memcache.config_options,mysql.rds_configuration,mysql.rds_replication_status
      Replicate_Wild_Do_Table:

Skipping replication errors in RDS:

On RDS you cannot configure the slave to skip a particular error number, so be careful while performing the dump and restore, otherwise the replication slave will break. You can use the Amazon RDS stored procedure mysql.rds_skip_repl_error to skip replication errors. It takes no arguments and skips one event at a time.
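A hedged sketch of invoking the procedure from the shell and checking the replica's state afterward; the endpoint is the one used earlier, and each skipped event should first be confirmed safe to lose:

```shell
# Skip one replication error on the RDS replica, then check whether the
# IO/SQL threads are running again; repeat only after verifying each event
mysql -u root -p -h prod-aws-server.cj3vbvmmfkzq.us-west-2.rds.amazonaws.com \
  -e "CALL mysql.rds_skip_repl_error; SHOW SLAVE STATUS\G" \
  | grep -E 'Slave_(IO|SQL)_Running|Last_Error'
```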

Conclusion:

By migrating our on-premises database to RDS, we get a world-class database running on an enterprise-class cloud platform, with strong performance, automated management, and the flexibility to use the database's features. Now that we have replicated the on-premises data to RDS and set up replication between the on-premises master and RDS, we are ready for a graceful switchover without downtime.

About The Author

Vikas Kukreti is working as a Technical Lead in Development Engineering at 3Pillar Global. Vikas has 9 years of database design, ETL, DWH, AWS, development and Data Science experience. He has database architecture experience in areas such as ETL, Data warehousing, Oracle Database, Hadoop, Hive, and Data Modeling. He has strong knowledge of Big Data, Data Science and Cloud Computing. Vikas is passionate about big data technology and Data Science, especially Hadoop and MongoDB. Vikas is a graduate of Uttar Pradesh Technical University (UPTU), India, and is a keen reader of classic novels and a movie freak.
