Realm Legacy Migration Guide

Legacy Realm, both the self-hosted Realm Object Server and Realm Cloud, is currently End of Life and will be shut down on November 1st, 2021.

Introduction

This document is intended to help customers of the legacy Realm Cloud as well as users of legacy Realm Object Server deployed on-premise to migrate to the new MongoDB Realm platform.

The important thing to note here is that the new version of Realm Sync, version 10 of the client SDKs, is incompatible with the server-side sync solution of legacy Realm Sync - version 10 clients will no longer be able to sync with legacy servers. Additionally, upgrading legacy clients to version 10 will not convert the old data stored on the client device to the new sync history format; new sync instructions were introduced, many old instructions were removed, and there is no conversion function built into the upgrade process. Query-based sync is also not supported in version 10 of the client SDKs. Many things change with Realm Sync in version 10, both at the code and architecture level, such as authentication, permissions, partitionKey vs. realmPath, and event-based processing applications such as the Global Notifier and Adapter (which run on the legacy Realm client SDKs pre-version 10).

While the above paragraph may seem daunting, it is shared in the spirit of full transparency, and the document below walks through technical examples of how to migrate. Please take solace in the fact that we are here to help. Do not hesitate to reach out and request guidance - we are standing by.

Because MongoDB Realm only supports the full-sync architecture of legacy Realm, we have only included instructions for the full-sync Realm migration. We are here to help if you have questions on how to migrate legacy query-based Sync.

Migrating an existing legacy Realm Sync mobile app, at a high-level, touches each one of the below topics:

  • Setup a MongoDB Realm Instance

  • Migrate Schema

  • Implement the Realm App Interface

  • Use the new login method for authentication

  • Open a Realm with a partition key value

  • Migrating Permissions

  • Transfer of Event processing APIs (Global Notifier/Adapter)

  • Legacy Query-Based Sync

  • Migrating Data

  • Rollout

The Realm SDKs used for MongoDB Realm cloud all use version 10 of the library. It may help to familiarize yourself with the platform by running through a tutorial of the new sync before diving straight into migrating your app. You can see the tutorials here -


Setup a MongoDB Realm Instance

The first thing you will need to do is spin up an Atlas cluster running MongoDB 4.4 and then attach a MongoDB Realm App to it. You can do this for free with the generous free tier that MongoDB offers. You can see instructions on how to do this here -


Migrate Schema

In MongoDB Realm, you need to set up a Sync Schema - any documents (MongoDB data structures) that do not conform to the Sync Schema will be rejected, and likewise any Realm Objects that do not conform to the Sync Schema will be rejected. Generally, Realm Objects map one-to-one to MongoDB documents. See here:
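For illustration only, the Project object used later in this guide might correspond to a server-side JSON schema similar to the following (the field names, including _partition, are assumptions based on the examples below):

{
  "title": "Project",
  "bsonType": "object",
  "required": ["_id", "body", "isDone", "timestamp"],
  "properties": {
    "_id": { "bsonType": "objectId" },
    "_partition": { "bsonType": "string" },
    "body": { "bsonType": "string" },
    "isDone": { "bsonType": "bool" },
    "timestamp": { "bsonType": "date" }
  }
}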

Fortunately, MongoDB Realm has an easy way to inject the server side sync schema from the client, it's called Development Mode -

With Development Mode you will be able to point a development app at MongoDB Realm and instantiate your schema. However, there are a couple of things you will need to change in your schema before pointing to MongoDB Realm in order to inject the sync schema on the server side.

Primary Keys

The first thing you will need to do is convert the primary keys on your Realm Objects from your previous primary key field name to the primary key required on all MongoDB documents, which is _id

Also, your new _id must be either an ObjectId, a String, or an Int.

For instance, if your legacy Realm Object looks like this -

Swift
class Project: Object {
    @objc dynamic var projectId: String = UUID().uuidString
    @objc dynamic var body: String = ""
    @objc dynamic var isDone: Bool = false
    @objc dynamic var timestamp: Date = Date()

    override static func primaryKey() -> String? {
        return "projectId"
    }
}
Java
public class Project extends RealmObject {
    @PrimaryKey
    @Required
    private String projectId = "";

    @Required
    private String body;

    @Required
    private Boolean isDone;

    @Required
    private Date timestamp;

    // getters and setters
}
Javascript
const ProjectSchema = {
  name: 'Project',
  primaryKey: 'projectId',
  properties: {
    projectId: 'string',
    body: 'string',
    isDone: 'bool',
    timestamp: 'date',
  }
};
.NET
public class Item : RealmObject
{
    [PrimaryKey]
    [MapTo("itemId")]
    public string ItemId { get; set; } = Guid.NewGuid().ToString();

    [MapTo("body")]
    public string Body { get; set; }

    [MapTo("isDone")]
    public bool IsDone { get; set; }

    [MapTo("timestamp")]
    public DateTimeOffset Timestamp { get; set; }
}

It should now look like this

Swift
class Project: Object {
    @objc dynamic var _id: ObjectId = ObjectId.generate()
    @objc dynamic var _partition: String = ""
    @objc dynamic var body: String = ""
    @objc dynamic var isDone: Bool = false
    @objc dynamic var timestamp: Date = Date()

    override static func primaryKey() -> String? {
        return "_id"
    }
}
Java
public class Project extends RealmObject {
    @PrimaryKey
    @Required
    private ObjectId _id = new ObjectId();

    @Required
    private String _partition;

    @Required
    private String body;

    @Required
    private Boolean isDone;

    @Required
    private Date timestamp;

    // getters and setters
}
Javascript
const ProjectSchema = {
  name: 'Project',
  primaryKey: '_id',
  properties: {
    _id: 'objectId',
    _partition: 'string?',
    body: 'string',
    isDone: 'bool',
    timestamp: 'date',
  }
};
.NET
public class Foo : RealmObject
{
    [PrimaryKey]
    [MapTo("_id")]
    public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

    [MapTo("_partition")]
    public string _partition { get; set; }

    [MapTo("body")]
    public string Body { get; set; }

    [MapTo("isDone")]
    public bool IsDone { get; set; }

    [MapTo("timestamp")]
    public DateTimeOffset Timestamp { get; set; }
}

You may have noticed that there is an additional field in the object, _partition

This field is critically important - it is what maps your legacy Realm full-sync paths to MongoDB. For instance, if you had a legacy Realm path on Realm Object Server such as /myTeam/myUser, you may want to map that path to a _partition value containing the same path as a string. For more information on MongoDB Realm partitions, please see here -
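As a rough illustration (the field values are hypothetical), a document synced from the legacy path /myTeam/myUser could carry that path as its partition value:

{
  "_id": { "$oid": "..." },
  "_partition": "/myTeam/myUser",
  "body": "Write the migration guide",
  "isDone": false,
  "timestamp": { "$date": "2021-04-18T12:00:00Z" }
}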

Types

Depending on your legacy Realm schema, you may need to convert some of your types because MongoDB does not support them. The most obvious one is the float type: you will need to convert it to a double or similar type in order to sync to MongoDB. The supported types are documented here for each respective language -
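For example, in the JavaScript schema notation used above, a property that was declared as a float in the legacy schema would be widened to a double before enabling sync (the Product object and its price property are illustrative, not part of the examples above):

const ProductSchema = {
  name: 'Product',
  primaryKey: '_id',
  properties: {
    _id: 'objectId',
    _partition: 'string?',
    price: 'double', // was 'float' in the legacy schema
  }
};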

Implement the Realm App Interface

One new thing you will need to implement in your app in order to connect to MongoDB Realm is initializing the App object. This is done before opening the realm and often lives in the SceneDelegate of an iOS app or the Application class of an Android app.

Swift
// Basic initialization
let app = App(id: "my-realm-app-id")

// Or initialize with a custom configuration
let configuration = AppConfiguration(
    baseURL: "https://realm.mongodb.com", // Custom base URL
    localAppName: "My App",
    localAppVersion: "3.14.159",
    defaultRequestTimeoutMS: 30000
)

let configuredApp = App(id: "my-realm-app-id", configuration: configuration)
Java
String appID = "<your App ID>"; // replace this with your App ID
Realm.init(this); // `this` is a Context, typically an Application or Activity
App app = new App(new AppConfiguration.Builder(appID)
        .appName("My App")
        .requestTimeout(30, TimeUnit.SECONDS)
        .build());
Javascript
import Realm from 'realm';

const appId = '<enter your Realm app ID here>'; // Set Realm app ID here.
const appConfig = {
  id: appId,
  timeout: 10000,
  app: {
    name: 'default',
    version: '0',
  },
};
const app = new Realm.App(appConfig);
.NET
namespace MDBRealm
{
    public static class Examples
    {
        private const string AppId = "<Your MongoDB Realm App Id Here>";
        private const string Partition = "my partition";

        public static App CreateApp()
        {
            return App.Create(new AppConfiguration(AppId)
            {
                LocalAppName = "My example app",
                LocalAppVersion = "1.2.3"
            });
        }
    }
}

For direct code examples of how to implement this please see the sample apps for your respective platform here -

Use the New Login Method for Authentication

You will now use the Realm App instance to authenticate to the new MongoDB Realm. For instance, if your user authentication on legacy Realm uses -


Swift
let auth_url = URL(string: "https://myinstance.cloud.realm.io")!
let creds = SyncCredentials.usernamePassword(username: "username", password: "password", register: false)

SyncUser.logIn(with: creds, server: auth_url, onCompletion: { [weak self] (user, err) in
    if let _ = user {
        // User is logged in
    } else if let error = err {
        fatalError(error.localizedDescription)
    }
})
Java
String authURL = "https://myinstance.cloud.realm.io";
SyncCredentials credentials = SyncCredentials.usernamePassword(username, password, false);

SyncUser.logInAsync(credentials, authURL, new SyncUser.Callback<SyncUser>() {
    @Override
    public void onSuccess(SyncUser user) {
        // User is logged in
    }

    @Override
    public void onError(ObjectServerError error) {
        // Handle error
    }
});
Javascript
const authUrl = 'https://myinstance.cloud.realm.io';
const creds = Realm.Sync.Credentials.usernamePassword('username', 'password', true); // createUser = true

Realm.Sync.User.login(authUrl, creds).then(user => {
    // user is logged in
    // do stuff ...
}).catch(error => {
    // an auth error has occurred
});
.NET
var authUrl = new Uri("https://myinstance.cloud.realm.io");
var credentials = Credentials.UsernamePassword(username, password, createUser: false);
var user = await User.LoginAsync(credentials, authUrl);

Now it will look like this -

Swift
let email = "[email protected]"
let password = "12345"
app.login(credentials: Credentials.emailPassword(email: email, password: password)) { result in
    DispatchQueue.main.async {
        switch result {
        case .success(let user):
            print("Logged in as user \(user.id)")
        case .failure(let error):
            print(error.localizedDescription)
        }
    }
}
Java
Realm.init(this); // context, usually an Activity or Application
String appID = "<your app ID>"; // replace this with your App ID
App app = new App(new AppConfiguration.Builder(appID)
        .build());

Credentials emailPasswordCredentials = Credentials.emailPassword("<email>", "<password>");

app.loginAsync(emailPasswordCredentials, it -> {
    if (it.isSuccess()) {
        Log.v(TAG, "Successfully authenticated using an email and password.");
        user = app.currentUser();
    } else {
        Log.e(TAG, it.getError().toString());
    }
});
Javascript
async function loginEmailPassword(email, password) {
    // Create an email/password credential
    const credentials = Realm.Credentials.emailPassword(email, password);
    try {
        // Authenticate the user
        const user = await app.logIn(credentials);
        // `App.currentUser` updates to match the logged in user
        assert(user.id === app.currentUser.id);
        return user;
    } catch (err) {
        console.error("Failed to log in", err);
    }
}

loginEmailPassword("[email protected]", "Bogano123!").then(user => {
    console.log("Successfully logged in!", user);
});
.NET
public static async Task<User> GetOrLoginAnonUser(App app)
{
    return app.CurrentUser ?? await app.LogInAsync(Credentials.Anonymous());
}

public static async Task<User> RegisterAndLoginUser(App app, string email)
{
    // Register and log in an email/password user. The email/password provider
    // must be enabled and set to auto-confirm users.
    await app.EmailPasswordAuth.RegisterUserAsync(email, "super-secure");
    return await app.LogInAsync(Credentials.EmailPassword(email, "super-secure"));
}

For any different authentication providers you can check the documentation here -

MongoDB Realm offers far more built-in authentication providers than legacy Realm did, and it offers more flexibility, such as the ability to set up a custom authentication function to execute arbitrary logic when a user authenticates -
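As a minimal sketch of what such a function could look like (the payload shape and the users collection below are assumptions, not part of your app), a custom function authentication handler receives the login payload and returns a stable external ID for the user:

exports = async function (loginPayload) {
    // `username` and the `users` collection are illustrative assumptions
    const { username } = loginPayload;
    const users = context.services
        .get("mongodb-atlas")
        .db("myDatabase")
        .collection("users");

    const user = await users.findOne({ username: username });
    if (!user) {
        throw new Error("unknown user");
    }

    // Return a unique, stable external ID for this user
    return user._id.toString();
};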

Open a Realm with a Partition key value

With the legacy Realm system, realms were namespaced - you used a value such as /myTeam/myProject when declaring from the client side which data you wanted to fetch. This full-sync architecture translates to MongoDB Realm by adding a field to each document in MongoDB that you want to sync. While we use a field named _partition in many of our examples, it could be any field as long as its value is an ObjectId, Int, or String. A partition of your entire MongoDB database maps to what was a realm in legacy Realm - you can read more about this here

To sync with MongoDB Realm you must convert the code that opened a realm URI into code that opens the realm with a partition value based on the MongoDB document partition key field. For instance, if you had this in legacy Realm -

Swift
// Create the configuration
let syncServerURL = URL(string: "realms://myinstance.cloud.realm.io/~/userRealm")!
let config = user.configuration(realmURL: syncServerURL)

// Open the remote Realm
let realm = try! Realm(configuration: config)
// Any changes made to this Realm will be synced across all devices!
Java
// Create the configuration
SyncUser user = SyncUser.current();
String url = "realms://myinstance.cloud.realm.io/~/userRealm";
SyncConfiguration config = user.createConfiguration(url).build();

// Open the remote Realm
Realm realm = Realm.getInstance(config);
// Any changes made to this Realm will be synced across all devices!
Javascript
const user = Realm.Sync.User.current;
const config = user.createConfiguration({
    sync: {
        url: "realms://myinstance.cloud.realm.io/~/userRealm",
        error: err => console.log(err)
    },
    schema: // ...
});

const realm = new Realm(config);
.NET
var user = User.Current;
var serverURL = new Uri("/default", UriKind.Relative);
var configuration = new FullSyncConfiguration(serverURL, user);

var realm = Realm.GetInstance(configuration);

You would now have code in MongoDB Realm that looked like so -

Swift
let app = App(id: "myRealmAppId")
// ... log in ...
let user = app.currentUser()!
let partitionValue = "myProject"
let realm = try! Realm(configuration: user.configuration(partitionValue: partitionValue))
Kotlin
val user: User? = app.currentUser()
val partitionValue: String = "myPartition"

val config = SyncConfiguration.Builder(user!!, partitionValue)
    .build()

var realm: Realm
// Sync all realm changes via a new instance, and when that instance has been
// successfully created, connect it to an on-screen list (a RecyclerView)
Realm.getInstanceAsync(config, object : Realm.Callback() {
    override fun onSuccess(_realm: Realm) {
        // Since this realm should live exactly as long as this activity,
        // assign the realm to a member variable
        realm = _realm
    }
})
Javascript
const config = {
    schema: [MySchema],
    sync: {
        user: user,
        partitionValue: partitionValue,
    },
};

try {
    const realm = await Realm.open(config);
} catch (error) {
    console.error(error);
}
.NET
private const string AppId = "myApp";
private const string Partition = "my partition";

public static Task<Realm> OpenRealm(User user)
{
    var config = new SyncConfiguration(Partition, user);
    return Realm.GetInstanceAsync(config);
}

Migrating Permissions

MongoDB Realm generally supports the same permissions that full-sync legacy Realm did. Each user can have either read or read/write permissions per partition. One important difference is that all permissions are evaluated on the server side in MongoDB Realm - if permission changes need to be made from the client, Functions should be leveraged. Please take a look at Sync rules to see the specifics -

All legacy full-sync Realm rules should be transferable to MongoDB Realm. The easiest way to do so is to use custom user data -

Add the partition values that each user is allowed to see to that user's custom user data, then define permissions -

Let's take an example. Suppose you had a legacy Realm called /projectX with UserA having read permissions to said Realm and UserB having write permissions to /projectX

The same permissions would be reflected in MongoDB Realm for UserA by inserting a readPartitions entry into UserA's custom user data. You would then have a permission expression in MongoDB Realm that looked like so -

{ "%%user.custom_data.readPartitions" : "%%partition" }

In this expression, the user's array of readPartitions is matched against the partition value they are requesting. If it matches they can sync, if it doesn't match then they are rejected as unauthorized.
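For illustration, UserA's custom user data document could then look something like this (the userId link field and the document layout are assumptions):

{
  "userId": "<UserA's MongoDB Realm user id>",
  "readPartitions": ["/projectX"],
  "writePartitions": []
}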

For UserB, the expression would look like so -

{ "%%user.custom_data.writePartitions" : "%%partition" }

Legacy Realm Sync also allowed users to make permission changes while offline. This system is not available in MongoDB Realm; however, a server-side Function can be called that allows one user to change the permissions of another user. The Function changes the target user's custom user data, granting them permission to the realm in question by inserting a new entry. This can be seen here -

With corresponding client side code here for your respective language -
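As a sketch of such a Function (assuming custom user data lives in a users collection keyed by a userId field, and that your rules restrict who may call it), granting another user write access to a partition could look like:

exports = async function (targetUserId, partition) {
    const users = context.services
        .get("mongodb-atlas")
        .db("myDatabase")
        .collection("users");

    // Add the partition to the target user's writePartitions array
    return users.updateOne(
        { userId: targetUserId },
        { $addToSet: { writePartitions: partition } }
    );
};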

Refactor Event processing APIs

Version 10 of the server-side Realm SDKs (Node.js and .NET) does not contain the Global Notifier or Adapter APIs. In order to move to MongoDB Realm, that code must be migrated to either a MongoDB Realm trigger or MongoDB's change stream functionality.

One possible way to implement an event-processing scheme is to leverage MongoDB Realm triggers, as documented here -

We say possible because it really depends on your requirements. MongoDB Realm triggers do not offer fault tolerance or the ability to retry; a trigger is only retried when the MongoDB Realm cloud architecture fails to fire it, not when a 3rd-party API returns an error response code. For this reason we would generally recommend using MongoDB change streams -

Change streams give you the ability to receive all synced changes as they are inserted into the MongoDB Atlas instance. You would tail the MongoDB oplog, observing only the synced collections (that is, your Realm objects), and then perform your business logic or shuttle each new sync change to a 3rd party or other system of record. The change stream API produces a resume token which marks the most recently processed change - this can be stored in a fault-tolerant message queue such as Kafka. In fact, there are pre-built MongoDB connectors for Kafka on Confluent Cloud, as both a source and a sink, which can fill this need. See here -
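A minimal Node.js sketch of tailing a synced collection with a resumable change stream (the database and collection names, and where you persist the resume token, are assumptions):

const { MongoClient } = require("mongodb");

async function watchItems(uri, resumeToken) {
    const client = new MongoClient(uri);
    await client.connect();

    const collection = client.db("myDatabase").collection("Item");
    const options = resumeToken ? { resumeAfter: resumeToken } : {};
    const changeStream = collection.watch([], options);

    for await (const change of changeStream) {
        // Perform business logic here or forward the change to a 3rd-party system
        console.log(change.operationType, change.documentKey);

        // Persist change._id (the resume token) to durable storage, e.g. Kafka,
        // so processing can resume where it left off after a restart
    }
}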

Legacy Query-based Sync

Legacy query-based sync is not supported in the new version of Realm Sync. The partition-based strategy of the new Realm Sync is analogous to the full-sync strategy of legacy Realm Sync. However, there are benefits to the new partition strategy that were not available to legacy sync users, the most notable being that you can now run cross-realm (cross-partition) queries on the MongoDB collection.

If your app is designed with legacy query-based sync then there are a couple of ways to migrate.

One of these is to use a different partitionKey field for two separate realms that are opened on the client side. A MongoDB Realm cloud app only allows a single partitionKey field when you enable sync; however, there is nothing preventing you from creating a second Realm cloud app with a different partitionKey field and connecting it to the same Atlas backend. You would need to clone your configuration over and you would have to authenticate twice, but this enables some more flexibility. For instance, imagine an app where there are salespeople in the field and you only want salespeople to see their own leads and contacts, while their manager should see an amalgamation of the leads and contacts of all the salespeople who report to them. A manager would log in to the managerCloudRealmApp and use the managerId as the partition key, whereas a salesperson would log in to the salespersonCloudRealmApp and use the salespersonId as the partition key. Your schema in this example could look like this -

Lead {
_id : string // The id of the lead
salespersonId : string // The id of the salesperson who owns this Lead
managerId: string // The id of the manager whom the salesperson reports to
}
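A rough sketch of what opening the manager-side realm could look like from a JavaScript client (the app IDs, LeadSchema, and login flow below are placeholders, not real values):

const managerApp = new Realm.App({ id: "<manager app id>" });
const salesApp = new Realm.App({ id: "<salesperson app id>" });

async function openManagerRealm(email, password, managerId) {
    const user = await managerApp.logIn(
        Realm.Credentials.emailPassword(email, password)
    );
    // The manager app is configured with managerId as its partition key field,
    // so this realm contains every Lead across the manager's salespeople
    return Realm.open({
        schema: [LeadSchema],
        sync: { user: user, partitionValue: managerId },
    });
}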

Another option is to denormalize the data and create copies of the data in each individual user's realm. You can use Database Triggers -

to copy changes from one content document that receives sync changes to any other user's realm which also contains that content document. You can use a contentId or similar stable identifier - when changes occur, you query for any other documents that share that contentId and apply the same changes to them. You can imagine optimizations to this, such as a lookup table or embedding metadata in the content document if needed.

Migrating Data

Because the v10 version of the SDKs is incompatible with the legacy Realm SDKs, all data must first be migrated to MongoDB and then re-downloaded by the now-upgraded v10 Realm SDKs. To do this, we recommend opening each realm on the legacy Realm Object Server or Realm Cloud, iterating through each object, converting it to JSON, and inserting it into MongoDB with the correct partition key value.

At this point your schema should already be instantiated on MongoDB Realm by connecting with a sample app to the new cloud - the script below will open a target realm and copy its data to MongoDB

// Copy Realm to Mongo

const Realm = require('realm');
const { MongoClient } = require("mongodb");

// Replace the uri string with your MongoDB deployment's connection string.
// UPDATE THESE
const uri = "<YOUR MONGODB ATLAS CONNECTION STRING>";
const realm_server = '<YOUR REALM OBJECT SERVER DOMAIN NAME>';
const username = '<YOUR REALM USERNAME>'; // this is the user doing the copy
const password = '<YOUR REALM PASSWORD>';
const legacy_realm_path = '<YOUR REALM PATH ON THE SERVER>'; // path on server

const client = new MongoClient(uri);

const ItemSchema = {
    name: 'Item',
    primaryKey: 'itemId',
    properties: {
        itemId: 'string',
        body: 'string',
        isDone: 'bool',
        timestamp: 'date'
    }
};

async function run() {
    const creds = Realm.Sync.Credentials.usernamePassword(username, password);
    const user = await Realm.Sync.User.login('https://' + realm_server, creds);
    const config = user.createConfiguration();
    config.sync.fullSynchronization = true;
    config.sync.url = 'realms://' + realm_server + legacy_realm_path;
    config.schema = [ItemSchema];

    const realm = await Realm.open(config);
    console.log('opened realm');
    const allObjects = realm.objects('Item');
    console.log(allObjects.length);

    // Convert each Realm object to plain JSON. If your new schema requires a
    // partition field (e.g. _partition), add its value to each document here.
    const documents = allObjects.map((obj) => obj.toJSON());

    try {
        await client.connect();

        const database = client.db('myDatabase');
        const collection = database.collection('Item');
        await collection.insertMany(documents);
    } finally {
        await client.close();
    }
}

run();

To run the above script, copy it to a file named copyToMongo.js then

nvm use 12
npm install realm@6.1.4
npm install mongodb
node copyToMongo.js


Rollout

Dev-Only

If the app is only in development, then the majority of the work is in refactoring the code to use the new APIs available in the SDK, as well as working with MongoDB documents instead of Realm Objects on the server side. The refactor should include all of the steps outlined in the sections above.

Production

There are varying levels of complexity for apps in production. If the app reads and writes data on the client, then there is a possibility of data loss during the migration, so care must be taken and the steps followed in order. This is because a user may make changes while offline while you are in the middle of your app upgrade and in the process of transferring data from the legacy Realm server to MongoDB Realm cloud. These unsynced changes could be lost, but there are ways to guard against this. To protect yourself you should -

  • First, check on app launch whether a new version of the app is available (this can be done locally, but more than likely you will want some control from the server side outside of the App Store, in which case a REST endpoint or a MongoDB Realm Function could be used - a minimal sketch of such a function follows this list). If there is a new version, block the user's flow through the app and disallow new edits. This is important because, in order to not lose data, you must first upload all unacknowledged changes to the server and only then allow the user to upgrade the app. Consumer and enterprise apps are in different situations -

    • For consumer apps delivered through the App Store, the developer does not have much control over the upgrade process. The user controls the upgrade - apps can be auto-updated without the user even launching them, which is controlled in Settings.

    • In enterprise apps there can be scheduled downtimes during off hours which allow app updates to be controlled by a Mobile Device Management (MDM) solution. In this way, the developer can have more explicit control over how and when their employees upgrade. If an MDM solution is not present then a more restrictive low-tech solution should be implemented, such as email notifications and a UI-blocking tool based on the app versions available.

  • To guard against the possibility of users making changes offline while an upgrade is in progress, we recommend shipping a version of the application that splits writes across a local and a synced realm and then, on update to the new v10, picks up any unsynced changes from the local realm and applies them. This requires keeping track of whether each piece of data has been synced or not.
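As referenced in the first bullet above, a minimal sketch of a MongoDB Realm Function that the app could call on launch to decide whether to block edits (the appConfig collection and version document are assumptions):

exports = async function () {
    const config = context.services
        .get("mongodb-atlas")
        .db("myDatabase")
        .collection("appConfig");

    const doc = await config.findOne({ _id: "minimumVersion" });
    // The client compares this value to its own version and, if out of date,
    // blocks new edits, uploads outstanding changes, and prompts the upgrade
    return doc ? doc.value : "1.0.0";
};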

Prod - readOnly

In this situation there is no danger of data loss on the client because the client is not making changes. Some coordination is still needed for the update, because the transfer of data from the legacy Realm server will likely be a one-time event. After that, updates from the remote source will flow to the MongoDB Realm database instead of to the ROS cloud or on-premise instance.

Prod - realmAsAPassThrough

There are deployments where the legacy Realm server is used as a passthrough for mobile data to be inserted into another system of record or database. In these cases, there is often a "rehydration" procedure which reseeds the data into the legacy Realm server in case of catastrophic data loss, meaning there is already an ETL job written which takes the data from the remote data source and writes it to Realm using a Realm SDK API or the legacy Realm GraphQL API. Here, the app logic that writes to a realm must be updated to either use the new Realm v10 syntax or use a MongoDB driver.

As a corollary to the above, this separate datastore could be MongoDB. If so, the app-logic migration above can be skipped. Instead, if it is an Atlas cluster then sync must be enabled, along with all of the client SDK migrations noted above. If it is an on-premise MongoDB then you can migrate to Atlas using mongomirror and then enable sync.

Prod - private read/write

In this scenario, the UI upgrade blocker will detect that the client has unsynced changes and also detect that a new version is available. It will stop the user from entering the app and display "Uploading unsynced data." Progress notifications can be used to determine when all changes have been uploaded, as shown below. Once the upload is complete, the client can signal a function which triggers the data migration app to transform the recently uploaded realm data and write it to MongoDB Atlas. Once the transfer is complete, the client is signaled to prompt the user to update the app. The user will then need to re-authenticate and, if everything has been correctly moved over, the client will re-download the data they just uploaded in the new sync format and proceed to use the app. If you can guide your users to upload all data before a scheduled maintenance period, the remote triggering of the data migration app can be done en masse and all users can be migrated at the same time, which removes the need to force individual users to upload all changes before upgrading.
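For example, with the legacy JavaScript SDK the upgrade blocker could wait until every outstanding local change has reached the legacy server before handing off to the migration step - a sketch using the legacy progress notification API:

// Resolve once all currently outstanding local changes have been uploaded
function uploadOutstandingChanges(realm) {
    return new Promise((resolve) => {
        const callback = (transferred, transferable) => {
            if (transferred >= transferable) {
                realm.syncSession.removeProgressNotification(callback);
                resolve();
            }
        };
        realm.syncSession.addProgressNotification(
            'upload',
            'forCurrentlyOutstandingWork',
            callback
        );
    });
}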

Prod - shared read/write

In this scenario, different clients/users share the same set of writable data and can be offline at any point. This is the most difficult case to guard against, because triggering the data migration app works best when it is done once per realm, after a given user has uploaded all of their changes and signaled that they are ready to begin their upgrade. If multiple users are offline making changes and coming back online at different times, there are a few options, all of which carry a high potential for errors - a few solutions are presented below.

  1. The backend can keep track of all users on a particular realm, and logic in the app can prevent those users from using the app; once all unsynced changes have been uploaded, the copy to MongoDB can be triggered.

  2. The backend can signal a user that has upgraded to proceed and now point to MongoDB Realm, where the client will re-download their data. A batch process or ETL job could then pick up any changes made by later offline users that came back online and push them to Atlas. This allows users to keep working, but it has the potential for conflict resolution to break since data is now being copied into a new system - additionally, users will not be able to share data between different versions of the app.

  3. The data migration app could be run in real time, replicating data back and forth between the legacy Realm cloud server and MongoDB Atlas for a shared realm while users slowly migrate over to the new product. This would allow users to continue to share data and would not block their work.

Reach out for Help

We realize that you may have questions while you undergo this migration process that are specific to your use case and architecture - rest assured that we are here to help. We have partnered with a systems integration partner, WeKan, to help users migrate to the new sync solution. WeKan has created a custom package for migration from legacy Realm to MongoDB Realm. You can see the engagement details here - If you need additional help, please fill out this form and we will be sure to get in contact with you to help as best we can.