Data migration is the process of moving data between systems. It is a key consideration for any system implementation, upgrade, or consolidation.
Organizations undertake data migrations for several reasons. They might need to enhance or create digital experiences across the system, upgrade databases, establish a new data warehouse, or merge new data from an acquisition or other source. Data migration is also necessary when deploying another system that sits alongside existing applications.
Data migration is commonly categorized as:
● Storage migration
● Database migration
● Application migration
● Business process migration
Data migrations are seldom as pleasant as a spring walk in the park, but by following these best practices, your task should be easier.
Although migrations are usually divided into “extract, transfer, and load,” a better approach might be:
● Extract, Validate
● Transfer, Validate
● Load, Validate
In the context of the extract-transform-load (ETL) process, any data migration will involve at least the transform and load steps. This means that extracted data needs to go through a series of functions in preparation, after which it can be loaded into a target location.
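The extract/transfer/load flow with validation after each stage can be sketched as follows. This is a minimal illustration in Python; the record layout and the validation rule (every record must keep its `id` and `email`) are hypothetical.

```python
# Sketch of an extract -> validate, transfer -> validate, load -> validate
# pipeline. Record structure and validation rules are hypothetical.

def extract(source_rows):
    # Pull raw records from the legacy source.
    return [dict(row) for row in source_rows]

def transform(records):
    # Normalize fields for the target schema (example rule only).
    return [{**r, "email": r["email"].strip().lower()} for r in records]

def load(records, target):
    # Write the prepared records into the target store.
    target.extend(records)
    return target

def validate(records, stage):
    # Fail fast if any record lost its key fields at this stage.
    bad = [r for r in records if not r.get("id") or not r.get("email")]
    if bad:
        raise ValueError(f"{stage}: {len(bad)} invalid record(s)")
    return records

source = [{"id": 1, "email": " Alice@Example.COM "},
          {"id": 2, "email": "bob@example.com"}]
target = []

records = validate(extract(source), "extract")
records = validate(transform(records), "transform")
validate(load(records, target), "load")
```

Validating after every stage, rather than only at the end, localizes the stage at which data quality was lost.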
The ETL approach is defined as follows: a Java batch framework executes the batch jobs for the migration.
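Java batch frameworks (for example, JSR 352 / Spring Batch) typically use a chunk-oriented model: read and process items one at a time, then write them in fixed-size chunks, one transaction per chunk. A minimal language-agnostic sketch of that model, shown here in Python with an illustrative chunk size and item logic:

```python
# Sketch of chunk-oriented batch processing: process items one by one,
# write them in fixed-size chunks. Chunk size and item logic are
# illustrative, not taken from any specific framework configuration.

def run_chunked_job(items, process, write, chunk_size=100):
    chunk = []
    written = 0
    for item in items:
        chunk.append(process(item))
        if len(chunk) >= chunk_size:
            write(chunk)            # one transaction per chunk
            written += len(chunk)
            chunk = []
    if chunk:                       # flush the final partial chunk
        write(chunk)
        written += len(chunk)
    return written

out = []
n = run_chunked_job(range(250), process=lambda x: x * 2,
                    write=out.extend, chunk_size=100)
```

Chunking keeps transactions small, which makes a failed batch job restartable from the last committed chunk rather than from the beginning.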
Strategy for successful Data Migration
Less successful migrations can result in inaccurate data that contains redundancies and unknowns. Any issues that exist in the source data can be amplified when it is brought into a new, more sophisticated system.
A complete data migration strategy prevents a subpar experience that ends up creating more problems than it solves. Aside from missing deadlines and exceeding budgets, incomplete plans can cause migration projects to fail altogether. In planning and strategizing the work, teams need to give migrations their full attention, rather than making them subordinate to another project with a large scope.
A strategic data migration plan should include consideration of these critical factors:
● Mapping the data: Before starting the ETL, map the attributes from the source to the target so the full business functionality can be validated from the legacy system to the target system. Unexpected issues can surface if this step is skipped.
● Cleanup: During preparation and extraction, any data issues or functional dependencies in the source data must be resolved.
● Code merge: Take the baseline code of the latest target version from a fresh installation of that version/patch. Merge the customized policy opcodes and custom facility module code into the baseline to create the target deployment package.
● Maintenance and protection: Categorize which data must be maintained in the target system and which should be archived to tape or disk, keeping the data in the target system as clean as possible.
● Governance: Tracking and reporting on data quality is important because it enables a better understanding of data integrity. The processes and tools used to produce this information should be highly usable and automate functions where possible.
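The mapping step above can be made explicit as a source-to-target attribute map applied during transformation. In this sketch the field names are hypothetical; the point is that unmapped source attributes are reported rather than silently dropped, which supports the governance goal of tracking data quality.

```python
# Hypothetical source-to-target attribute map, applied during transform.
# Unmapped source fields are surfaced instead of silently discarded.

FIELD_MAP = {
    "cust_id": "customer_id",
    "cust_nm": "customer_name",
    "eml": "email",
}

def map_record(source_record):
    mapped = {FIELD_MAP[k]: v for k, v in source_record.items()
              if k in FIELD_MAP}
    unmapped = sorted(set(source_record) - set(FIELD_MAP))
    return mapped, unmapped

rec, leftovers = map_record(
    {"cust_id": 7, "cust_nm": "Acme", "eml": "a@b.c", "fax": "n/a"})
```

Reviewing the `leftovers` list with the business before go-live decides whether each unmapped attribute is archived, dropped, or added to the map.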
In addition to a structured, step-by-step procedure, a data migration plan should include a process for bringing on the right tools for the project.
Data Migration approaches
There is more than one way to build a data migration strategy. The organization's specific business needs and requirements will determine the most appropriate one.
However, most strategies fall into one of two categories:
● Single-shot - big bang
In a big bang data migration, the full data transfer is completed within a limited window of time. With this approach, live systems experience downtime while data goes through ETL processing and transitions to the target database.
● Low cost: resource and infrastructure costs are reduced, and OPEX is lower than with an incremental rollout
● Faster ROI
● It all happens in one time-boxed event, requiring relatively little time to complete
● The pressure, though, can be intense, as the business operates with one of its resources offline. This risks a compromised implementation.
Where business needs allow, this approach should be planned with multiple dress rehearsals, to make sure the data migration fits within the time limits and to assess data quality before the actual go-live event for the new system.
The diagram below depicts a typical big bang migration approach.
Incremental/Phased migrations, in contrast, complete the migration process in increments/phases. In this approach, the old system and the new are run in parallel, which eliminates downtime or operational interruptions. Processes running in real-time can keep data continuously migrating.
● There are no hard-and-fast deadlines for the new system's go-live event, as the existing and new systems run in parallel
● The organization has more time to adopt the new system and get used to it
Resource and infrastructure costs are higher, however, and so is OPEX, because the organization must maintain both applications across the ecosystem until the new system's stability is confirmed.
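An incremental migration can be sketched as repeated sync passes that copy only the records changed since the last pass, while both systems stay online. The `updated_at` watermark field and record shape below are assumptions for illustration.

```python
# Sketch of an incremental (phased) migration: each pass copies only
# records changed since the last sync. The "updated_at" watermark
# field is a hypothetical change indicator on the source records.

def sync_increment(source, target, last_sync):
    changed = [r for r in source if r["updated_at"] > last_sync]
    for r in changed:
        target[r["id"]] = r       # upsert into the new system
    new_watermark = max((r["updated_at"] for r in changed),
                        default=last_sync)
    return len(changed), new_watermark

source = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 25},
    {"id": 3, "updated_at": 40},
]
target = {}
n1, wm = sync_increment(source, target, last_sync=0)    # initial phase
n2, wm = sync_increment(source, target, last_sync=wm)   # nothing new yet
```

Running the pass on a schedule (or driven by change-data-capture events) keeps the target continuously close to the source until cutover.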
Best Practices for Data Migration
Regardless of which implementation method you follow, there are some best practices to keep in mind:
● Always back up the data before migrating
● Stick to the strategy and follow the plan; do not change strategies mid-implementation
● Audit the data throughout the ETL process, as there is a chance of data loss or mismatch; the legacy and target systems are not always as alike as we assume
● Test, test, test: During the planning and design phases, and throughout implementation and maintenance, test the data migration to make sure you will eventually achieve the desired outcome
● Conduct multiple iterations of ETL during the implementation phase with live data, and validate each live test against business functionality
● Define the post-migration process/approach before switching over to handle live traffic
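The audit practice above can be made concrete by comparing row counts and per-row checksums between source and target after each ETL run. This is a minimal sketch; a production audit would also reconcile aggregates (sums, key ranges) per table.

```python
# Post-ETL audit sketch: compare row counts and a per-row checksum
# between source and target to detect loss or mismatch.

import hashlib

def row_checksum(row):
    # Canonical, key-ordered serialization so equal rows hash equally.
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(payload.encode()).hexdigest()

def audit(source_rows, target_rows):
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count: {len(source_rows)} vs {len(target_rows)}")
    src = {row_checksum(r) for r in source_rows}
    tgt = {row_checksum(r) for r in target_rows}
    missing = len(src - tgt)
    if missing:
        issues.append(f"{missing} source row(s) missing or altered in target")
    return issues

src = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
ok_report = audit(src, list(src))              # clean run: no issues
bad_report = audit(src, [{"id": 1, "v": "a"}]) # a dropped row is flagged
```

Running the audit after every rehearsal and again after the live run gives the governance reporting described earlier something measurable to track.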