AI migration plan: let AI draft, let humans decide
An AI migration plan can save time, but teams still need human review, rollback checks, staging tests, and sign-off before any live data move.

Why blind AI migrations break things
An AI migration plan can look clean, logical, and complete while missing the one rule that keeps your data usable. The model sees table names, field types, and sample rows. It does not see the quiet business rules your team follows every day, like which customers should never merge, which invoices must stay frozen after export, or why an "inactive" account still needs billing history.
That gap causes real damage fast. If AI maps one field the wrong way, the mistake does not stay in one row. It spreads across every imported record, every downstream report, and every tool that trusts the new system. A bad match between "billing contact" and "account owner" sounds small until sales emails, support tickets, and payment reminders all go to the wrong person.
Database migrations get dangerous when the write cannot be undone. A script that overwrites IDs, trims old notes, or collapses duplicate records can destroy context in minutes. Teams often think they can fix it later, then learn the source data has already changed, the logs are incomplete, or the new system has started syncing the bad data into other apps.
The worst part is how convincing AI output can sound. A model will often explain its choices with calm, tidy language, even when those choices rest on a bad assumption. That tone can push people into skipping review because the plan feels thought through. Confidence is not proof.
A simple customer record move shows the risk. Say the old system stores one person with three email addresses and years of order notes. The new system allows one email and shorter text fields. AI might suggest taking the newest email and trimming the notes to fit. That solves the format problem, but it can erase the address finance still uses and cut off details your support team needs during disputes.
Human review slows this down a little. That is the point. A short pause before any irreversible data move is much cheaper than cleaning up bad records after customers, finance, and support all start seeing different versions of the truth.
Draw the line before planning starts
Before you ask for an AI migration plan, decide what the tool may draft and what it may never decide. AI can turn notes into a runbook, suggest step order, split work into small batches, draft test queries, and prepare dry-run checklists. It can also point out places where rollback checks belong. It should not choose any step that can erase, merge, overwrite, or expose live data.
Write those limits down in plain language. If a task is reversible and easy to test in staging, AI may draft it. If a task changes production data, access rules, billing records, customer IDs, or audit logs, a person must approve it before anyone runs it. The same rule applies to scripts that seem harmless but write back to the database.
A short approval table keeps people honest:
- AI may draft runbooks, sequencing, validation queries, and dry-run notes
- Engineers review every command before execution
- A named approver signs off on schema drops, backfills, merges, and deletes
- Product or operations signs off when customer records, billing, or compliance data may change
- Anyone on the rollout team can stop the run if results look wrong
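The approval table above can even be made executable as a simple lookup that blocks a command until every required role has signed off. This is a hedged sketch; the action names and role labels are illustrative, not a standard, and should be adapted to your own team.

```python
# Hypothetical approval matrix: action type -> roles that must sign off first.
# Action and role names are illustrative; adjust them to match your team.
APPROVALS = {
    "draft_runbook":    set(),                                    # AI may draft freely
    "validation_query": {"engineer"},                             # read-only, still reviewed
    "schema_drop":      {"engineer", "named_approver"},
    "backfill":         {"engineer", "named_approver"},
    "merge_records":    {"engineer", "named_approver", "data_owner"},
    "delete_rows":      {"engineer", "named_approver", "data_owner"},
}

def may_run(action: str, signed_off_by: set) -> bool:
    """Return True only when every required role has signed off on the action."""
    required = APPROVALS.get(action)
    if required is None:
        return False  # unknown actions are blocked by default
    return required <= signed_off_by
```

For example, `may_run("delete_rows", {"engineer"})` returns `False`, because deletes also need the named approver and the data owner.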
For early runs, block live writes to the part of the system you are testing, or use a read-only window. That feels strict, but it cuts noise. You want to see whether the plan works on stable data, not while new records keep arriving and hiding mistakes.
Name people, not roles alone. "CTO" is too vague if three people assume someone else owns the call. On a small team, that may be a founder and a senior engineer. If the company uses a fractional CTO, that person can approve the technical side while an operations owner approves business risk.
Teams often skip this step because it feels slow. It is still faster than cleaning up one bad irreversible write.
Collect the facts before you ask AI to help
An AI migration plan only works when the input is real and complete. If you feed it half-remembered details, it can produce a neat plan that breaks production on day one.
Start with the current schema. Export tables, columns, types, indexes, constraints, and any field notes your team already uses. A column called status might mean payment state in one app and account state in another. AI cannot guess that safely.
You also need a clear note on the source and target systems. Write down what is moving, where it lives now, where it will go, and what versions or formats each side uses. Keep it simple and concrete so the draft does not mix old rules with new ones.
Before you ask for any step order, collect these facts in one place:
- the latest schema export and short notes for confusing fields
- the source system and target system, with data formats and limits
- tables that hold money, legal records, or customer data
- allowed downtime, backup window, and restore time
- sample records that include edge cases
That last point saves teams the most pain later. Do not hand AI only clean sample rows. Include blanks, old records, duplicate emails, canceled orders, refunds, merged accounts, and names or addresses with unusual characters. Those records expose the messy parts early.
Sensitive tables need extra care. If a table affects invoices, contracts, taxes, or customer identity, mark it clearly. That tells the model, and your reviewers, where human review for data changes must be strict and where extra rollback checks will matter later.
Downtime limits matter too. A plan for a 5-minute maintenance window looks very different from a weekend cutover. Backup timing matters for the same reason. If a full restore takes 3 hours, the draft must avoid risky moves that assume a quick undo.
This prep work becomes the base of your database migration checklist. It also makes staging data validation much more useful, because the test data will reflect real problems instead of a cleaned-up fantasy. When humans review the draft, they can judge the steps against facts, not hope.
Draft the migration in small steps
Large migrations go wrong when the plan reads like one big command. Ask AI to break the job into the smallest useful moves instead. One schema change, one data copy, one index build, one app switch.
That makes the plan easier to read, easier to test, and easier to stop. A good AI migration plan should feel almost boring. If one step fails, you want to know exactly where to pause, what changed, and what to undo.
Each step needs five parts:
- the exact action
- the check that proves it worked
- the stop point before any live write
- the rollback action if the check fails
- the person who signs off before the next step
The check matters as much as the action. If AI says, "copy users to the new table," it should also say how the team will confirm row counts, null rates, duplicate IDs, and app behavior before moving on. Without that, the plan is just a guess with nice formatting.
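Those confirmation checks can be concrete queries rather than intentions. A minimal sketch using SQLite as a stand-in; the table and column names are assumptions, and a real migration would run the equivalent queries against the production engine:

```python
import sqlite3

def copy_checks(conn: sqlite3.Connection, src: str, dst: str, id_col: str = "id") -> dict:
    """Return simple facts a reviewer can compare before approving the next step."""
    cur = conn.cursor()
    facts = {}
    # 1. Row counts must match unless the plan explicitly says otherwise.
    facts["src_rows"] = cur.execute(f"SELECT COUNT(*) FROM {src}").fetchone()[0]
    facts["dst_rows"] = cur.execute(f"SELECT COUNT(*) FROM {dst}").fetchone()[0]
    # 2. The key column should not pick up nulls during the copy.
    facts["dst_null_ids"] = cur.execute(
        f"SELECT COUNT(*) FROM {dst} WHERE {id_col} IS NULL").fetchone()[0]
    # 3. Duplicate IDs in the target are a hard stop.
    facts["dst_dup_ids"] = cur.execute(
        f"SELECT COUNT(*) FROM (SELECT {id_col} FROM {dst} "
        f"GROUP BY {id_col} HAVING COUNT(*) > 1)").fetchone()[0]
    return facts
```

The point is not the specific queries but that each one produces a number the team can compare against an expected value before moving on.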
Put a hard stop before every irreversible move. That includes deleting old data, rewriting primary fields, merging records, or changing how the app writes new entries. AI often proposes a clean sequence, but humans need to mark the points where production data can change in ways you cannot easily undo.
Rollback steps need real commands, not vague notes. "Revert if needed" is useless at 2 a.m. A better step says who restores the snapshot, which script switches writes back, how the team confirms the old path still works, and how long that recovery usually takes.
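The five parts of a step, plus the requirement for a real rollback command, can be forced into shape with a small structure. A sketch with illustrative field names; nothing here is a standard runbook format:

```python
from dataclasses import dataclass

@dataclass
class MigrationStep:
    """One step of the runbook. Every field must be filled before the run."""
    action: str             # the exact command or script to run
    check: str              # the query or procedure that proves it worked
    stop_before_write: bool # hard stop before any irreversible live write
    rollback: str           # a real command, not "revert if needed"
    rollback_owner: str     # who restores the snapshot or switches writes back
    approver: str           # named person who signs off before the next step

    def ready(self) -> bool:
        """A step with a vague rollback or no named approver is not ready."""
        vague = {"", "revert if needed", "restore backup", "tbd"}
        return self.rollback.strip().lower() not in vague and bool(self.approver.strip())
```

A step whose rollback reads "revert if needed" fails `ready()`, which is exactly the kind of draft that should never reach production.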
Run the whole order in staging before production. Use data that looks close to real data, even if you mask sensitive fields. Time every step. Watch for lock time, slow queries, failed retries, and mismatched totals. If the rollback path fails in staging, the production plan is not ready.
A database migration checklist works best when each step can stand on its own. If someone joins the call halfway through, they should still see the current state, the next action, and the exit path in one screen.
Put rollback checks on every risky move
Every migration step that changes live data needs a clear way back. "We have backups" is not enough. A backup only helps if someone restored it recently and knows how long that restore takes.
Before production, do one real restore test in a safe environment. Restore the latest backup, start the app, and check that the data is usable. If restore takes two hours and your outage window is 20 minutes, the team needs a different plan.
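The arithmetic behind that call is worth making explicit. A tiny sketch of the go/no-go rule; the safety factor is an assumption, not a standard:

```python
def restore_fits_window(measured_restore_min: float,
                        outage_window_min: float,
                        safety_factor: float = 1.5) -> bool:
    """A restore drill must fit the outage window with headroom to spare.
    The 1.5x safety factor is an assumption; pick your own margin."""
    return measured_restore_min * safety_factor <= outage_window_min
```

The example from the text: a measured 120-minute restore against a 20-minute window gives `restore_fits_window(120, 20) == False`, so the team needs a different plan.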
Small checks beat one big guess at the end. After each step, count rows before and after the change. Then compare facts that should stay stable: totals, record IDs, and timestamps such as created_at and updated_at. If 250,000 customer records go in, 250,000 should come out, with the same IDs unless the plan says otherwise.
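Those stable facts can be compared mechanically after each step. A sketch, assuming records are held as simple dicts keyed by ID; a real migration would pull the same facts from both databases:

```python
def compare_snapshots(before: dict, after: dict) -> list:
    """Compare before/after snapshots of {id: record}. Returns a list of problems."""
    problems = []
    if len(before) != len(after):
        problems.append(f"row count changed: {len(before)} -> {len(after)}")
    missing = set(before) - set(after)
    added = set(after) - set(before)
    if missing:
        problems.append(f"missing ids: {sorted(missing)}")
    if added:
        problems.append(f"unexpected ids: {sorted(added)}")
    # Timestamps such as created_at should survive the move unchanged.
    for rid in set(before) & set(after):
        if before[rid].get("created_at") != after[rid].get("created_at"):
            problems.append(f"created_at shifted for id {rid}")
    return problems
```

An empty result means the step passed; any non-empty result is a reason to pause before the next step.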
These checks belong in any database migration checklist, even when AI wrote the first draft.
Keep both versions for a short time
For risky moves, keep the old and new data side by side for a brief period instead of deleting the source right away. A read-only copy is often enough. That extra time helps the team spot bad mappings, missing records, or strange timestamp shifts before the old data is gone.
A clear stop rule also matters. Teams make bad calls when they feel pressure to finish. Write down the exact events that force a rollback:
- row counts differ from the expected number
- totals do not match within the agreed limit
- IDs are missing, duplicated, or changed by mistake
- timestamps shift in ways the plan did not allow
- restore time is longer than the team approved
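The stop rules above can be written as one function that anyone on the call can run against the numbers in front of them. A sketch; the parameter names and the idea of a single tolerance are illustrative:

```python
def must_roll_back(expected_rows: int, actual_rows: int,
                   expected_total: float, actual_total: float,
                   total_tolerance: float,
                   missing_ids: int, duplicate_ids: int,
                   restore_minutes: float, approved_restore_minutes: float) -> list:
    """Return the list of tripped stop rules. Any non-empty result means roll back."""
    tripped = []
    if actual_rows != expected_rows:
        tripped.append("row count mismatch")
    if abs(actual_total - expected_total) > total_tolerance:
        tripped.append("totals outside agreed limit")
    if missing_ids or duplicate_ids:
        tripped.append("ids missing or duplicated")
    if restore_minutes > approved_restore_minutes:
        tripped.append("restore slower than approved")
    return tripped
```

Writing the rules this way removes the judgment call in the moment: if the list is not empty, the next step does not run.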
Name one person who can stop the rollout, and make that choice easy. If one check fails, pause the next step, roll back, and inspect the data before anyone tries a quick fix in production.
A good AI migration plan does not just list steps. It tells people when to stop, what to compare, and how to get back to a known good state.
A simple example with customer records
Say a company wants to move customer data from an old CRM to a new one before the next billing cycle. The old database stores first_name and last_name in separate fields. The new one expects a single full_name field. It also uses a different status model, so old values like trial, active, and suspended need new labels.
This is where an AI migration plan helps, if you keep it on a short leash. The tool can draft the order of work, suggest field mappings, and list rollback checks. People still need to approve every step that can damage data.
A sensible draft usually looks like this:
- copy customer records into a staging table
- build full_name from the two old name fields
- map old status values to the new set
- flag records that do not map cleanly
- pause deletes until staff confirm the matches
The name merge sounds easy, but messy records show why human review matters. One customer may have no last name. Another may have a company name in first_name and nothing in last_name. A third may include extra spaces or titles like "Dr". AI can spot patterns, but someone should still open a small sample and check the odd cases.
Ten records is a good start, as long as they are not ten clean records picked at random. Review the awkward ones on purpose: blank fields, duplicate accounts, unusual status history, and records with notes from sales or support. If those pass, your mapping is probably sound. If they fail, fix the rules before you touch production.
Status changes need the same care. Maybe active becomes current, trial becomes pending, and suspended becomes on_hold. That mapping may look fine in a draft, but a human needs to ask one simple question: does each new status keep the same business meaning? If billing, support, or reporting treats that status differently, a bad map causes real problems fast.
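The merge and the status map from this example fit in a few lines of draft code. A sketch; the edge-case rules (dropping leading titles, collapsing extra spaces, flagging unmapped statuses as None) are assumptions a human reviewer should confirm against real records:

```python
# Status map taken from the example in the text; TITLES is an assumed list.
STATUS_MAP = {"active": "current", "trial": "pending", "suspended": "on_hold"}
TITLES = {"dr", "dr.", "mr", "mr.", "ms", "ms.", "mrs", "mrs."}

def build_full_name(first_name: str, last_name: str) -> str:
    """Merge two name fields, dropping leading titles and extra spaces."""
    parts = (first_name or "").split() + (last_name or "").split()
    while parts and parts[0].lower() in TITLES:
        parts = parts[1:]  # assumption: titles are dropped, not kept
    return " ".join(parts)

def map_status(old_status: str):
    """Return the new label, or None so the record gets flagged for review."""
    return STATUS_MAP.get((old_status or "").strip().lower())
```

So `build_full_name("  Dr  Jane ", "Doe")` gives `"Jane Doe"`, a company stored only in first_name passes through untouched, and an unexpected value like `"archived"` maps to `None` instead of guessing, which is what routes it to the flag-for-review list.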
The safest move is to delay hard deletes. Keep old rows available until the team verifies that customer counts match, spot checks look right, and reports still total up the way they should. Only then should someone approve the final cutover. That approval should come after the checks pass, not before.
Mistakes teams make with AI written plans
AI can draft a migration plan that looks tidy and complete. That is exactly why teams trust it too fast. A clean document can hide bad guesses, missing edge cases, and rollback steps that fall apart the moment real data fights back.
The first mistake is accepting guessed field mappings. If the old system has "name" and the new one has "display_name," an AI migration plan may map them without asking who uses each field and how. That guess gets worse with dates, status values, country codes, tax IDs, and free-text notes. One wrong mapping can spread bad data across thousands of records in minutes.
Dirty data causes the next failure. Teams test a plan on a small, clean sample and feel safe because everything works. Production data is rarely clean. Some rows are empty, some repeat, some break the format, and some hold values that nobody expected. If you skip staging data validation with real exports, AI will write steps for an ideal database, not your actual one.
A few warning signs show up again and again:
- The plan assumes every record has the same required fields.
- Duplicate users or customers get merged by a weak rule.
- Rollback says "restore backup" but nobody timed it or tested it.
- One release changes schema, backfills data, renames fields, and updates app logic at once.
Rollback steps often look fine on paper and fail in practice. Teams write them because the checklist says they should, not because they rehearsed them. If a migration deletes rows, rewrites IDs, or transforms financial history, the rollback needs a real drill. You need to know how long restore takes, what data you lose between backup and failure, and who makes the stop call.
Trying to change too much in one release is the mistake that ties the rest together. Big releases hide the root cause when something breaks. Smaller releases cost a bit more planning time, but they make human review for data changes much easier. You can validate one mapping, one transform, and one rollback check at a time. That is slower on day one and much faster than cleaning up after a bad migration.
Quick checks before production
The last hour before a migration decides whether you have a calm release or a messy rollback. If even one safety check still depends on hope, wait.
A good AI migration plan can suggest the order of steps, but production needs human proof. Before anyone starts the run, the team should confirm five things with evidence, not memory.
- The backup is complete, recent, and stored where the team can reach it fast. One person should restore it to a clean environment and prove the app can read the recovered data.
- The staging run used data that looks like production, not a neat sample. Old rows, missing values, duplicate records, odd text, and oversized fields often cause the real trouble.
- Every step that deletes, merges, or overwrites data has owner approval. Product, operations, and anyone responsible for the records should say yes before the change window opens.
- Monitoring is live before the first script runs. Logs, error alerts, database metrics, and queue depth should all show up where the team can see them in real time.
- Support knows what users may notice. If names, statuses, timestamps, or account history might look different for a few hours, support needs a short brief and a clear escalation path.
One detail gets skipped all the time: testing the restore. Teams feel safe when they see a backup job marked "success," but that only proves the file exists. It does not prove the backup is usable, complete, or fast enough to recover under pressure.
Staging needs the same honesty. If production has ten years of customer records with messy imports and manual edits, staging should reflect that shape. Clean test data can hide problems until the worst moment.
Approval matters most on irreversible moves. If the script will collapse duplicate customer profiles into one record, the owner of that data should confirm the matching rules, edge cases, and what happens to notes, tags, and audit history.
When all five checks pass, the release team can focus on execution instead of guessing. That alone cuts a lot of bad midnight decisions.
What to do next
An AI migration plan is only useful when it becomes a runbook that real people can follow under pressure. Clean up the draft, remove guesswork, and turn every step into something specific: who runs it, what they run, what result they expect, and what makes them stop.
A good runbook is boring in the best way. It should list the exact order, the checks before each risky change, the rollback action for each irreversible move, and the person who approves the next step. If a teammate reads it and still has to ask, "What do I do now?", the draft needs more work.
Before production, schedule a dry run in staging or on a recent copy of production data. Time every step. Small surprises matter here. A data copy that takes 4 minutes in theory may take 19 in practice, and that changes your maintenance window, your communication plan, and your rollback decision.
Keep the final prep simple:
- Review the runbook line by line with the people who will execute it.
- Test backup restore, not just backup creation.
- Record the expected timing for each step and the maximum delay you will accept.
- Name one person who can say "stop" or "continue" during the migration.
That last point saves teams from chaos. Five smart people debating in a live migration is slower than one clear owner listening to input and making the call. Pick that person before the work starts, not in the middle of a failed step.
If you want a second set of eyes, Oleg Sotnikov can review migration runbooks as a Fractional CTO. That can help when the plan includes tricky rollback logic, production risk, or AI-assisted delivery work that still needs strong human review.
Do not aim for a perfect document. Aim for a runbook that a tired team can execute at 2 a.m. without guessing. If any irreversible data step still feels vague, pause and rewrite it before production.