Support --checksum-data flag, on-the-fly checksum verification #867
Open: shlomi-noach wants to merge 33 commits into github:master from openark:rowcopy-checksum
Conversation
Commits:
- Updates from upstream
- Using golang 1.14
- Actions/Workflow: upload artifact
- Support a complete ALTER TABLE statement in --alter
resubmission of openark#4 from downstream
This PR introduces `--checksum-data`, an opt-in checksum verification that runs throughout the migration.

With `--checksum-data` enabled, each rowcopy (a range of rows copied from the original table to the ghost table) is followed by a checksum of the two tables over that same range.

Checksums execute concurrently with rowcopy and are the exception to `gh-ost`'s single-thread model.

A checksum may well fail while the migration is running: since `gh-ost` has an async design, where binlog entries are applied some time after they are generated, it is quite possible that ongoing traffic will make some checksums fail. A failed range's checksum is retried again and again until it succeeds.
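As a concrete illustration of the idea, here is a minimal Go sketch; this is not gh-ost's actual implementation, and the table names (`mytable`, `_mytable_gho`), the column list, and the `CRC32`-based aggregate query are all assumptions made for the example:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

// rangeChecksum aggregates per-row CRC32 values over [minID, maxID].
// The column list must be identical on both tables for the comparison
// to be meaningful. All identifiers here are illustrative assumptions.
func rangeChecksum(db *sql.DB, table string, minID, maxID int64) (int64, error) {
	query := fmt.Sprintf(
		"SELECT COALESCE(SUM(CRC32(CONCAT_WS('#', id, name, value))), 0) FROM %s WHERE id BETWEEN ? AND ?",
		table,
	)
	var checksum int64
	err := db.QueryRow(query, minID, maxID).Scan(&checksum)
	return checksum, err
}

// verifyRange compares the checksum of one rowcopy range on the original
// table and on the ghost table, retrying on mismatch: ongoing traffic may
// cause a transient mismatch until pending binlog events are applied.
func verifyRange(db *sql.DB, minID, maxID int64) error {
	for {
		orig, err := rangeChecksum(db, "mytable", minID, maxID)
		if err != nil {
			return err
		}
		ghost, err := rangeChecksum(db, "_mytable_gho", minID, maxID)
		if err != nil {
			return err
		}
		if orig == ghost {
			return nil
		}
		time.Sleep(time.Second) // likely pending binlog events; retry
	}
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := verifyRange(db, 1, 1000); err != nil {
		log.Fatal(err)
	}
	fmt.Println("range checksum verified")
}
```

The unbounded retry loop mirrors the retry-until-successful behavior described above; an order-independent aggregate such as `BIT_XOR(CRC32(...))` would work equally well for the range checksum.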
When `--checksum-data` is enabled, cut-over does not complete while failed checksums remain. While the tables are locked in preparation for cut-over, a grace period is given so that checksum evaluation can run to completion.

This is experimental.
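As a rough sketch of that cut-over gate (assumed names and structure, not gh-ost's actual code): with the tables locked, wait up to the grace period for outstanding checksum failures to clear, and abort otherwise.

```go
package main

import (
	"errors"
	"log"
	"sync/atomic"
	"time"
)

// failedChecksums would be maintained by the concurrent checksum workers:
// incremented when a range's checksum fails, decremented when a retry of
// that range later succeeds. The name is a hypothetical for this sketch.
var failedChecksums int64

// waitForChecksums gates cut-over: with the tables already locked, wait up
// to the grace period for outstanding checksum failures to clear.
func waitForChecksums(grace time.Duration) error {
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		if atomic.LoadInt64(&failedChecksums) == 0 {
			return nil // all ranges validated; safe to complete cut-over
		}
		time.Sleep(100 * time.Millisecond)
	}
	return errors.New("checksum failures still outstanding; aborting cut-over")
}

func main() {
	// No workers run in this demo, so the counter stays at zero and the
	// gate opens immediately.
	if err := waitForChecksums(3 * time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("cut-over may proceed")
}
```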
Risk assessment: risky!

With the flag disabled (the default), behavior does not change and risk is low. With the flag enabled, the following happen (or can happen):
- More reads directly on the master server: these are the checksum queries, run against both the original table and the ghost table. It's worth noting that the row-copy operation runs a full scan on the original table anyhow, and so the extra reads do not (should not) bring into memory data pages not already brought into memory by row-copy.
- Slower migration time due to the extra reads.
- Risk at time of cut-over. The following scenario is possible: `gh-ost` begins cut-over, thus locking the table for writes, and pending checksum retries then prolong the time the lock is held. To clarify, I haven't seen this happen, but I predict it might show up in production. At this time I have no access to a busy production server, so I have not verified it.
I'm presenting this PR upstream for visibility. It's an important change that further validates (or invalidates!) the correctness of migrated data, so it may be of interest. I'd suggest massive experimentation.