
Live Migration to MongoDB Atlas: Moving from Community and Enterprise Without Downtime

Polystreak Team · 2026-03-27 · 10 min read

You're running MongoDB Community or Enterprise on self-managed servers — EC2, on-prem VMs, or containers. The operational overhead is real: patching, backup orchestration, replica set management, monitoring, security updates. Atlas eliminates all of that. The question isn't whether to migrate — it's how to do it without downtime, data loss, or a 3 AM maintenance window.

The migration itself should be invisible to your users. The database changes; the application keeps running. That's the bar.

The Three Migration Paths

MongoDB provides three official tools for live migration, plus mongodump/mongorestore for offline bulk loads. Each handles different source topologies and offers different levels of control.

| Tool | Source | Best For | Downtime |
| --- | --- | --- | --- |
| Atlas Live Migration Service | Community / Enterprise replica set | Simplest path — Atlas manages the migration end-to-end | Seconds (cutover only) |
| mongomirror | Community / Enterprise replica set | More control — you run the binary, monitor progress, choose cutover timing | Seconds (cutover only) |
| Cluster-to-Cluster Sync (mongosync) | Enterprise 6.0+ / Atlas | Sharded clusters, continuous sync, reversible migration | Seconds (cutover only) |
| mongodump / mongorestore | Any MongoDB version | Small databases (<10GB), one-time bulk load, no live sync | Minutes to hours (offline) |
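The decision logic behind this table can be encoded as a small helper — a sketch of the selection rules above and nothing more; the inputs and return strings are illustrative:

```python
def pick_migration_tool(sharded: bool, live_sync_needed: bool,
                        data_gb: float, version: tuple[int, int]) -> str:
    """Map the comparison table to a tool recommendation (illustrative)."""
    # Small data with no live-sync requirement: offline dump/restore is simplest.
    if not live_sync_needed and data_gb < 10:
        return "mongodump / mongorestore"
    # Sharded sources need mongosync, which requires MongoDB 6.0+.
    if sharded:
        if version >= (6, 0):
            return "Cluster-to-Cluster Sync (mongosync)"
        raise ValueError("sharded live migration requires MongoDB 6.0+")
    # Replica sets: Atlas Live Migration for simplicity, mongomirror for control.
    return "Atlas Live Migration Service or mongomirror"

print(pick_migration_tool(sharded=False, live_sync_needed=True,
                          data_gb=200, version=(5, 0)))
```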

Atlas Live Migration Service

The Live Migration Service is built into the Atlas console. You provide the source cluster's connection details, Atlas validates connectivity, performs an initial sync of all data, then tails the source oplog to replicate ongoing changes in near real-time. When you're ready, you perform a cutover — point your application's connection string to Atlas and the migration is complete.

How It Works

  • Step 1: Create target Atlas cluster — Choose the tier (M10+), region, and cloud provider. Match the MongoDB version of your source or go one major version higher.
  • Step 2: Start migration in Atlas console — Go to your cluster > Migrate Data > I want to migrate from an existing MongoDB deployment. Enter the source hostname, port, and authentication credentials.
  • Step 3: Network connectivity — Atlas must reach your source. Options: VPC peering (recommended), public IP with firewall allow-listing of Atlas migration servers, or a VPN tunnel. Atlas provides the IP ranges to allow.
  • Step 4: Initial sync — Atlas copies all databases, collections, indexes, and users from source to target. Duration depends on data size and network bandwidth. A 100GB database over a 1Gbps link takes roughly 15-20 minutes.
  • Step 5: Oplog tailing — After initial sync, Atlas continuously reads the source's oplog and applies changes to the target. Your source keeps running normally. Writes on the source appear on Atlas within seconds.
  • Step 6: Cutover — When the oplog lag drops to near-zero and you're confident the target is caught up, update your application's connection string to the Atlas URI. Stop writes to the source. The migration is done.
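The judgment call in step 6 — "lag is near zero, safe to cut over" — can be sketched as a simple check. This is a hypothetical sketch: the timestamps would come from your migration tool's status output, and the 2-second threshold is an assumption to tune:

```python
import time

def ready_to_cut_over(source_oplog_ts: float, target_applied_ts: float,
                      max_lag_seconds: float = 2.0) -> bool:
    """Return True when the target has applied oplog entries close enough
    to the source's latest entry that the cutover window is negligible.

    Timestamps are Unix seconds; in practice they come from the migration
    tool's status output (hypothetical interface)."""
    lag = source_oplog_ts - target_applied_ts
    return lag <= max_lag_seconds

# 1.5 seconds of lag is within a 2-second threshold; 45 seconds is not.
now = time.time()
print(ready_to_cut_over(now, now - 1.5))   # True
print(ready_to_cut_over(now, now - 45.0))  # False
```

In an automated cutover script, this check would gate the connection-string swap and the re-enabling of writes.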

The actual downtime is only the cutover window — the seconds between stopping writes to the source and your application connecting to Atlas. For most deployments, this is under 10 seconds.

Atlas Live Migration is a one-click solution for replica sets. You don't install anything. You don't run any binary. Atlas does the heavy lifting.

Limitations

  • Source must be a replica set — standalone instances need to be converted to a single-node replica set first (rs.initiate()).
  • Source MongoDB version must be 2.6 or later.
  • Atlas must have network access to the source — this can be complex in locked-down enterprise environments.
  • Does not migrate sharded clusters — use mongosync for that.
  • No built-in rollback — once you cut over, going back requires a separate reverse migration.

mongomirror: The Power-User Tool

mongomirror is a standalone binary provided by MongoDB that performs the same initial-sync-plus-oplog-tailing migration, but you run it yourself. This gives you more control: you choose when to start, you monitor the progress directly, you decide the exact cutover moment, and you can script the entire process into your CI/CD pipeline.

How It Works

  • Download mongomirror from the MongoDB Download Center (available for Linux, macOS, Windows).
  • Run with source and destination parameters: `mongomirror --host <source-replica-set>/<host1:port,host2:port> --destination <atlas-connection-string> --ssl --username <user> --password <pass>`
  • mongomirror performs an initial sync — copies all collections, indexes, and documents to the Atlas target.
  • After initial sync completes, mongomirror enters oplog tailing mode — continuously applying source changes to Atlas.
  • Monitor lag: mongomirror outputs the oplog timestamp it has processed vs the source's latest oplog entry. When lag is near-zero, you're ready to cut over.
  • Cutover: Stop your application's writes to the source, wait for mongomirror to report zero lag, then switch your connection string to Atlas. Terminate mongomirror.
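Because you run the binary yourself, the invocation is easy to script into a pipeline. A minimal sketch that assembles the mongomirror command line from configuration — the hostnames and credentials are placeholders, and actually running it requires the mongomirror binary and network access to both clusters:

```python
import shlex

def build_mongomirror_cmd(source_rs: str, source_hosts: str,
                          atlas_uri: str, user: str, password: str) -> list[str]:
    """Assemble the mongomirror argv from the flags described above."""
    return [
        "mongomirror",
        "--host", f"{source_rs}/{source_hosts}",  # replica-set-name/host1:port,host2:port
        "--destination", atlas_uri,
        "--ssl",                                   # TLS to Atlas is required
        "--username", user,
        "--password", password,
    ]

cmd = build_mongomirror_cmd("rs0", "db1.internal:27017,db2.internal:27017",
                            "mongodb+srv://cluster0.example.mongodb.net",
                            "migrator", "s3cret")
print(shlex.join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```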

mongomirror is ideal when you need to run the migration from a jump host inside your network, when you want granular logging, or when your security team requires that no external service (like the Atlas migration service) connects directly to your source.

Key Flags and Options

| Flag | Purpose |
| --- | --- |
| --host | Source replica set connection string (`replica-set-name/host1:port,host2:port`) |
| --destination | Atlas cluster connection string (from Atlas console) |
| --ssl | Enable TLS for the Atlas connection (required) |
| --drop | Drop existing data on the target before syncing (use for clean re-runs) |
| --oplogPath | Path to store oplog entries locally for resume capability |
| --numParallelCollections | Number of collections to sync in parallel (default 4; increase for many small collections) |
| --writeConcern | Write concern for the target (default majority) |
| --bookmarkFile | File to track migration progress for resume after interruption |

mongomirror gives you the same zero-downtime migration as the Atlas console — but you control the binary, the network path, and the timing. It's preferred by teams with strict change management processes.

Cluster-to-Cluster Sync (mongosync)

For sharded clusters or when you need continuous bidirectional sync, mongosync (Cluster-to-Cluster Sync) is the tool. It was introduced with MongoDB 6.0 and handles topologies that mongomirror and the Live Migration Service cannot — including sharded-to-sharded and Atlas-to-Atlas migrations.

  • Supports sharded cluster sources and destinations.
  • Bidirectional sync for phased migrations — run both clusters simultaneously while migrating traffic gradually.
  • Reversible — if issues arise after cutover, you can sync back to the source.
  • Requires MongoDB 6.0+ on both source and destination.
  • More complex setup than mongomirror — requires a dedicated mongosync process per shard.
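A rough sketch of driving mongosync: the process is launched with connection strings for both clusters and then controlled over its HTTP API. The port, endpoint, and payload below reflect mongosync's documented defaults, but treat the details as assumptions to verify against the docs for your mongosync version:

```python
import json
import urllib.request

# mongosync is launched separately, e.g.:
#   mongosync --cluster0 <source-uri> --cluster1 <destination-uri>
# It then listens on localhost:27182 and is driven via its HTTP API.

def start_sync_request(api_base: str = "http://localhost:27182") -> urllib.request.Request:
    """Build the POST that tells mongosync to begin syncing cluster0 -> cluster1."""
    payload = json.dumps({"source": "cluster0", "destination": "cluster1"}).encode()
    return urllib.request.Request(
        f"{api_base}/api/v1/start",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = start_sync_request()
print(req.full_url, req.data.decode())
# urllib.request.urlopen(req)  # uncomment with a live mongosync process
```

Progress, commit (cutover), and reverse operations are driven through sibling endpoints on the same API.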

Pre-Migration Checklist

Regardless of which tool you use, these preparation steps prevent the most common migration failures.

  • 1. Version compatibility — Atlas supports specific MongoDB versions. If your source is on 3.6, plan a version upgrade path (3.6 → 4.0 → 4.2 → 4.4 → 5.0 → 6.0 → 7.0 — MongoDB only supports upgrading one major release at a time) before or as part of the migration.
  • 2. Oplog size — The source oplog must be large enough to hold all changes during the initial sync. If initial sync takes 2 hours, the oplog must retain at least 2 hours of writes. Increase with replSetResizeOplog if needed.
  • 3. Index builds — Large index builds during migration slow both source and sync. Build indexes on the source before starting, or let Atlas build them after initial sync.
  • 4. Authentication — Create a dedicated migration user on the source with readAnyDatabase and clusterMonitor roles. Don't use root credentials.
  • 5. Network bandwidth — Estimate initial sync duration in seconds as data_size_GB / (bandwidth_Gbps × 0.1), treating 1 Gbps as roughly 0.1 GB/s of effective throughput after protocol overhead. A 500GB database over 1Gbps ≈ 5,000 seconds, or about an hour and a half. Ensure the link can sustain this without impacting production traffic.
  • 6. Atlas cluster sizing — Provision the target Atlas cluster with enough storage for the data + indexes + 30% headroom. Choose a tier that matches your source's CPU/RAM capacity.
  • 7. Connection string update plan — Prepare the application's connection string swap. Use environment variables or a config service so the switch is a config change, not a code deploy.
  • 8. Test with a staging migration — Run the full migration against a staging Atlas cluster first. Validate data integrity, index presence, and application behavior before touching production.
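The bandwidth and oplog-window estimates from items 2 and 5 combine into a quick back-of-the-envelope calculation. The 0.1 GB/s-per-Gbps efficiency factor is the same rough assumption used above, and the 2x safety factor on the oplog window is my own padding for retries and slowdowns:

```python
def initial_sync_seconds(data_gb: float, bandwidth_gbps: float,
                         effective_gbs_per_gbps: float = 0.1) -> float:
    """Estimate initial sync duration: 1 Gbps ~= 0.1 GB/s effective throughput."""
    return data_gb / (bandwidth_gbps * effective_gbs_per_gbps)

def min_oplog_window_hours(sync_seconds: float, safety_factor: float = 2.0) -> float:
    """The source oplog must retain at least the sync duration of writes;
    a 2x safety factor (assumption) covers retries and slowdowns."""
    return sync_seconds * safety_factor / 3600

secs = initial_sync_seconds(500, 1.0)  # 5000 seconds, ~1.4 hours
print(round(secs), round(min_oplog_window_hours(secs), 1))
```

If the required oplog window exceeds what the source currently retains, resize it with replSetResizeOplog before starting the sync.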

The Cutover: Making It Seamless

The cutover is the critical moment. Here's the sequence that keeps downtime under 10 seconds.

  • 1. Verify oplog lag is near-zero — migration tool reports seconds of lag, not minutes.
  • 2. Enable application-level read-only mode — stop writes to the source. This can be a feature flag, a load balancer drain, or a connection pool pause.
  • 3. Wait for final oplog entries to replicate — 5-10 seconds for the last writes to sync to Atlas.
  • 4. Run validation — Compare document counts, collection stats, and a sample of documents between source and target.
  • 5. Switch connection string — Update your application config to point to the Atlas connection URI.
  • 6. Resume application writes — Disable read-only mode. Your application is now running on Atlas.
  • 7. Monitor — Watch Atlas metrics (connections, ops/sec, latency) for the first 30 minutes to confirm normal behavior.
  • 8. Keep the source running (read-only) for 24-48 hours as a rollback safety net.
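Step 4's validation can be as simple as comparing per-namespace document counts. A minimal sketch with stubbed-in counts — in production you would pull these from both clusters with a driver such as PyMongo and also spot-check document contents:

```python
def compare_counts(source: dict[str, int], target: dict[str, int]) -> list[str]:
    """Return human-readable mismatches between source and target
    document counts, keyed by namespace ("db.collection")."""
    problems = []
    for ns, src_count in source.items():
        tgt_count = target.get(ns)
        if tgt_count is None:
            problems.append(f"{ns}: missing on target")
        elif tgt_count != src_count:
            problems.append(f"{ns}: source={src_count} target={tgt_count}")
    return problems

# Hypothetical counts gathered just before cutover:
source = {"shop.orders": 120_000, "shop.users": 45_210}
target = {"shop.orders": 120_000, "shop.users": 45_208}
print(compare_counts(source, target))  # flags shop.users as 2 documents behind
```

An empty result is your green light for step 5; any mismatch means waiting for replication to catch up before switching the connection string.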

Post-Migration: What to Do After Cutover

  • Enable Atlas features you couldn't use on self-managed: auto-scaling, continuous backup with point-in-time recovery, Atlas Search, Performance Advisor.
  • Configure Atlas alerts for connection count, oplog window, disk usage, and slow queries.
  • Review and optimize indexes using Atlas Performance Advisor — it analyzes your query patterns and suggests missing indexes.
  • Set up VPC peering between your application VPC and Atlas — eliminates public internet egress and reduces data transfer costs.
  • Decommission the source cluster after the safety window passes — stop paying for the self-managed infrastructure.
| Tool | Complexity | Sharded Support | Rollback | Best For |
| --- | --- | --- | --- | --- |
| Atlas Live Migration | Low (console-driven) | No | Manual reverse migration | Teams wanting the simplest path for replica sets |
| mongomirror | Medium (CLI binary) | No | Manual reverse migration | Teams needing control, scripting, or air-gapped networks |
| mongosync | High (per-shard process) | Yes | Built-in reverse sync | Sharded clusters, phased rollouts, enterprise migrations |
| mongodump/restore | Low | Yes (per shard) | Source untouched | Small databases, one-time loads, version jumps |

The migration tool matters less than the preparation. Oplog sizing, network bandwidth, and a rehearsed cutover plan are what make the difference between a smooth migration and a 2 AM incident.