Postgres Connector

Distribution of the Postgres data connector package is currently paused. This page is only available for reference. We apologize for the inconvenience.

This guide walks you through using Realm's data adapter to sync data between the Realm Object Server and a Postgres server. It shows an installation of both Postgres and the Realm Data Adapter on a bare CentOS server, but any other platform that allows enabling logical replication can also be used. If you already have an existing Postgres server, you can skip ahead to the step that installs the Realm Data Adapter.


Prerequisites

  • Postgres 10.x (or lower)

  • Realm Object Server 3.11.0 (or higher) or Realm Cloud

Setting up your Postgres Instance

Already have a Postgres Server that you'd like to use? You'll need to enable logical replication on your server. More detailed instructions can be found in the install walkthrough below.

Quickstart via Amazon RDS

Amazon RDS provides a way to quickly spin up a database.

You'll want to start by creating a Postgres instance. We suggest using Postgres 9.6. While creating the instance, spec it out however you like, but make note of the DB instance identifier, master username, and master password. We will assume that we are using the default port of 5432.

After creating the instance, you may test connecting to it.
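One way to test connectivity is from node itself, since the adapter stack already runs on node. This is a sketch only: it assumes the `pg` npm package (`npm install pg`), and the host, user, and database values are placeholders from your RDS instance.

```javascript
// Build the connection string from the values you noted while creating
// the RDS instance (all placeholders here are assumptions).
function buildConnectionString({ user, host, port, database }) {
  return `postgres://${user}@${host}:${port}/${database}`;
}

const rds = {
  host: '<DB_INSTANCE_IDENTIFIER>.<REGION>.rds.amazonaws.com',
  port: 5432,
  user: '<MASTER_USERNAME>',
  database: 'postgres'
};

console.log(buildConnectionString(rds));

// To actually connect (requires network access to the instance):
// const { Client } = require('pg');
// const client = new Client({ ...rds, password: '<MASTER_PASSWORD>' });
// client.connect()
//   .then(() => client.query('SELECT version()'))
//   .then(res => { console.log(res.rows[0].version); return client.end(); });
```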

We will then need to enable logical replication, which facilitates the data synchronization with Realm.

To do this, you will need to create a new parameter group within Amazon RDS. The parameter group family should match your version of Postgres (i.e. postgres9.6). You can keep all of the settings as the default except rds.logical_replication, which needs to be set to 1.

More information on logical replication in RDS can be found here.

Manual Install via CentOS

Installing Postgres

If you don't have a working installation of Postgres yet, you'll want to start by installing one. It is common to do this with YUM. You can find more details in their official documentation.

Now let’s initialize the Postgres server

[root@centos ~]# /usr/pgsql-9.6/bin/postgresql96-setup initdb

Now we need to configure the Postgres server to accept the Realm data adapter connection. It is imperative to enable logical_replication as the adapter relies on this feature for data synchronization.

# Open the Postgres conf file and add the following lines
[root@centos ~]# vi /var/lib/pgsql/9.6/data/postgresql.conf
wal_level = logical
max_wal_senders = 8
wal_keep_segments = 4
max_replication_slots = 4

Logical replication is a built-in streaming replication feature that allows clients to subscribe to the transaction logs. The Realm Data Adapter parses these transactions and uses them to generate Realm write transactions. These are the settings we recommend for most installations, but they can be adjusted for your performance or architectural needs.
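The parsing step above happens inside the adapter package, but the general idea can be sketched as follows. This is a hypothetical illustration only: the decoded-change shape shown here (wal2json-style, with `columnnames` and `columnvalues` arrays) and the `changeToRealmWrite` helper are assumptions, not the adapter's actual internals.

```javascript
// Hypothetical: turn a decoded logical-replication change into a
// Realm-style write description (object type plus property values).
function changeToRealmWrite(change) {
  const obj = {};
  change.columnnames.forEach((name, i) => { obj[name] = change.columnvalues[i]; });
  return { kind: change.kind, type: change.table, values: obj };
}

const decoded = {
  kind: 'insert',
  table: 'Bus',
  columnnames: ['vehicle', 'fleet_id'],
  columnvalues: [12, 'north']
};

console.log(changeToRealmWrite(decoded));
// → { kind: 'insert', type: 'Bus', values: { vehicle: 12, fleet_id: 'north' } }
```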

You will also need to find the lines

#listen_addresses = 'localhost'
#port = 5432

Uncomment them and configure the listening address and port. Using * for the address makes the server listen on every IP on the machine, which is convenient for an initial development setup, but you may want to lock this down later.

listen_addresses = '*'
port = 5432

Now let’s edit the conf file to allow password authentication and a remote replication host:

[root@centos ~]# sudo vi /var/lib/pgsql/9.6/data/pg_hba.conf

Find the lines that look like this near the bottom, change the method to md5, and add the IP address of your Realm Data Adapter server:

host    all          all        127.0.0.1/32    ident
host    all          all        ::1/128         ident
#host   replication  postgres   127.0.0.1/32    ident

Now they should look like this

host    all          all        127.0.0.1/32                        md5
host    all          all        ::1/128                             md5
host    replication  postgres   <IP_ADDRESS_OF_ADAPTER_SERVER>/32   md5

Now let’s enable and start our Postgres server

[root@centos ~]# systemctl enable postgresql-9.6
[root@centos ~]# systemctl start postgresql-9.6

Now let’s set up a password by switching to the postgres user

[root@centos ~]# su - postgres
# Enter the psql prompt
-bash-4.2$ psql
# Configure a password for the postgres user
postgres=# \password postgres

You should now be able to connect to the Postgres server remotely. There are various tools you can use to connect and browse Postgres, such as Postico.

Realm Data Connector

The following section will walk you through the setup of the Realm Data Adapter, which facilitates bidirectional sync between the Realm Object Server and a Postgres server. The Realm Data Adapter is a node process which runs using Realm-JS and is distributed as its own NPM package. If you do not yet have a running instance of the Realm Object Server, sign up for a cloud instance or see instructions on how to install a self-hosted Realm Object Server.


Distribution of the Postgres data connector package is currently paused. We apologize for the inconvenience.

You can see release notes for the latest releases on Github, and watch the repo if you want to be notified about new releases.



SSH to your new CentOS server for the data adapter

#Install EPEL which contains the node repos we need
[root@centos ~]# sudo yum -y install epel-release
#Install nodejs - it should come with npm
[root@centos ~]# sudo yum -y install nodejs
#We will also need to install node-gyp and postgresql-devel
[root@centos ~]# sudo yum -y install node-gyp
[root@centos ~]# sudo yum -y install postgresql-devel
#Now install Realm and the Realm Data Adapter npm package.
#Create a new npm project (name it as you like), then install the adapter,
#updating the path and version according to the package you received
[root@centos ~]# npm init
[root@centos ~]# npm install ~/Downloads/realm-postgres-adapters-1.4.2.tgz

If you are using a version earlier than 1.3.0, you will also need to npm install realm.

If you are using Windows 10 as your operating system, the Postgres driver might not install correctly. Try vcpkg install libpq:x64-windows to install all dependencies.


The data adapter relies on a number of components for two way data synchronization.

  • Data Adapter Package: This contains the core codebase of the adapter

  • An adapter script: This runs as a process to facilitate synchronization

  • Model definition: This defines the schema to be synchronized

  • Configuration variables: These define the settings used during adapter configuration

While the breakdown of these components could be done in many ways, we'll show you a basic architecture with three files.

Prepare your Config File

This file will contain the information required to connect to your Postgres Server. You'll need your connection information for both Realm and Postgres available. You might outline your file like so:

module.exports = {
    // Database name
    database_name: "<INSERT_PG_DATABASE_NAME>",
    // Realm Object Server information
    // self-hosted: realm://
    // cloud: realms://
    realm_object_server_url: "realm://<IP_OR_DNS_OF_ROS>",
    auth_server_url: "http://<IP_OR_DNS_OF_ROS>",
    admin_username: "<ADMIN_USER>",
    admin_password: "<ADMIN_USER_PASSWORD>",
    // The synced Realm path for the data, i.e. 'postgresRealm'
    // Note: be careful not to start this path with a `/` if you have a
    // trailing slash in your realm_object_server_url above
    target_realm_path: '<URI_OF_REALM>',
    // Postgres config used for all connections - replace with your data
    postgres_config: {
        host: '<IP_OR_DNS_OF_POSTGRES>',
        port: 5432,
        user: '<POSTGRES_USER>',
        password: '<POSTGRES_PASSWORD>'
    }
};

Define your Models

You will need to let the adapter know which data you would like to sync between Realm and Postgres. You will do this by defining the schema of your data models, which is then passed into the adapter process. This schema definition follows the syntax used by our JavaScript SDK.

Below you'll find an example models file:

const Bus = {
    name: 'Bus',
    primaryKey: 'vehicle',
    properties: {
        Scheduleid: { type: 'Schedule' },
        vehicle: 'int',
        fleet_id: { type: 'string', optional: true },
        last_main: { type: 'date', optional: true }
    }
};

const Driver = {
    name: 'Driver',
    primaryKey: 'id',
    properties: {
        Busvehicle: { type: 'Bus' },
        id: 'int',
        name: { type: 'string', optional: true },
        employ_date: { type: 'date', optional: true }
    }
};

const Schedule = {
    name: 'Schedule',
    primaryKey: 'id',
    properties: {
        id: 'int',
        Routeid: { type: 'Route' },
        departure: { type: 'date', optional: true },
        arrival: { type: 'date', optional: true }
    }
};

const Route = {
    name: 'Route',
    primaryKey: 'id',
    properties: {
        id: 'string'
    }
};

module.exports = [Bus, Driver, Schedule, Route];

There are a couple of things to notice when comparing this code to the Postgres schema. First, not all of the tables are reflected as Realm objects. By listing only a subset, the adapter will ignore the other tables and sync only the Bus, Schedule, Driver, and Route tables. Additionally, any fields that you do not care about syncing can be dropped from the data model; you can see that fare is missing from the Route schema. For the Postgres adapter to work, all objects must have a primary key so that lookups can resolve in a timely manner.
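Because the primary key requirement applies to every synced model, a quick check over the exported models array can catch omissions before you start the adapter. The `modelsMissingPrimaryKey` helper below is hypothetical (not part of the adapter package), just a sketch of such a check.

```javascript
// Hypothetical helper: list the names of any models that are missing the
// primaryKey the Postgres adapter requires.
function modelsMissingPrimaryKey(models) {
  return models.filter(m => !m.primaryKey).map(m => m.name);
}

const models = [
  { name: 'Bus', primaryKey: 'vehicle', properties: {} },
  { name: 'Route', properties: {} } // no primaryKey -- would break lookups
];

console.log(modelsMissingPrimaryKey(models)); // → [ 'Route' ]
```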

Foreign Keys

The other important point demonstrated in this mapping is the Routeid property which in Postgres is a foreign key to the primary key in the Route table. However, instead of defining this as a string in Realm, we use Realm's link support to define this as a link to the corresponding Route object. The adapter will automatically resolve the foreign key relationship into a Realm link!

The data structure between Route and Schedule is that each Route row/object links to zero or more Schedule rows/objects; the number of linked Schedules represents the number of schedules for each route. Thus, as you add more schedules to a route object in the app, the app creates Schedule objects which the adapter then converts into new rows in the Schedule table.

Similarly, the reverse will happen, such that if you remove a Schedule row from Postgres, the adapter will convert this change and delete the corresponding Schedule object in Realm.


Realm also has the ability to support List properties as shown here.

When the data adapter encounters a List property, it creates a subtable in Postgres with three columns: oid, idx, and target. The oid is the primary key of the main table, target is the primary key of the list table, and idx is the index into the array.

The concept of lists does not typically exist in relational databases, so the data adapter was designed to be one-way only for lists; if you want to maintain order from the Postgres side, you will need to use a stored procedure. The indexes are zero-based and must be contiguous and unique. If you don't need ordered lists, you can change your model so that children reference the parent object, which removes the need for list properties.
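The (oid, idx, target) layout described above can be pictured with a small flattening function. The `listToSubtableRows` helper and the key values are hypothetical, purely to illustrate the subtable shape.

```javascript
// Hypothetical illustration of the subtable layout:
// oid = parent row's primary key, idx = zero-based position in the list,
// target = linked row's primary key.
function listToSubtableRows(oid, targets) {
  return targets.map((target, idx) => ({ oid, idx, target }));
}

console.log(listToSubtableRows('route-7', ['sched-1', 'sched-2', 'sched-3']));
// → [ { oid: 'route-7', idx: 0, target: 'sched-1' },
//     { oid: 'route-7', idx: 1, target: 'sched-2' },
//     { oid: 'route-7', idx: 2, target: 'sched-3' } ]
```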

Your Adapter Script

Finally, we'll need to prepare and ultimately run our adapter script to test the synchronization. We'll present an example script below and then go through many of the configuration options which are available in a section below. You'll notice that our constants point to the configuration and models files which we prepared above.

const Realm = require('realm');
const fs = require('fs');
const path = require('path');
const PostgresAdapter = require('realm-postgres-adapters').PostgresAdapter;
const Config = require('./config');
const Models = require('./realmmodels');

async function main() {
    // Login as an admin user
    var admin_user = await Realm.Sync.User.login(Config.auth_server_url, Config.admin_username, Config.admin_password);

    // Print out uncaught exceptions
    process.on('uncaughtException', (err) => console.log(err));

    var adapter = new PostgresAdapter({
        // Realm configuration parameters for connecting to ROS
        realmConfig: {
            server: Config.realm_object_server_url, // or specify your realm-object-server location
            user: admin_user,
        },
        dbName: Config.database_name,
        // Postgres configuration and database name
        postgresConfig: Config.postgres_config,
        resetPostgresReplicationSlot: false,
        // Set to true to create the Postgres DB if not already created
        createPostgresDB: true,
        initializeRealmFromPostgres: false,
        // Map of custom types to Postgres types
        /*
        customPostgresTypes: {
            'USER-DEFINED': 'text',
            'ARRAY': 'text',
            'mpaa_rating': 'text',
            'year': 'integer',
        },
        */
        // Set to true to indicate Postgres tables should be created and
        // properties added to these tables based on schema additions
        // made in Realm. If set to false any desired changes to the
        // Postgres schema will need to be made external to the adapter.
        applyRealmSchemaChangesToPostgres: true,
        // Only match a single Realm called 'myRealm'
        realmRegex: Config.target_realm_path,
        // Specify the Realm name all Postgres changes should be applied to
        mapPostgresChangeToRealmPath: Config.target_realm_path,
        // Specify the Realm objects we want to replicate in Postgres.
        // Any types or properties not specified here will not be replicated
        schema: Models,
        printCommandsToConsole: true,
    });
}

main();

After you've finished setting up your adapter file, you can run it with node via your terminal:

node adapter

When in development it may be necessary to reset your environment as you test things. For example, you may be solidifying your model configuration and make a change which is considered destructive. We've created a simple reset script which you can use to clean up your environment. You can find it here.

Detailed Configuration Options

Environment Setup and Rules

From the explanations above, we know that the adapter uses Postgres replication slots to facilitate sync from Postgres to Realm.

resetPostgresReplicationSlot?: boolean,

This optional config variable resets the replication slot on the Postgres server when the adapter first boots up. This is useful for testing, since it ignores any changes made in Postgres since the last run.

When first running the adapter, you will need to make a few selections for handling the flow of data.

createPostgresDB?: boolean,
initializeRealmFromPostgres?: boolean,

The first option creates the Postgres database (assuming it does not already exist) using the configuration parameters you provided, and creates tables using the Realm object schema. The second option does the reverse: it creates the Realms on the Realm Object Server matching the already existing Postgres schema. Typically, one is set to true and the other to false. If you are adding a Postgres server to an existing deployment of Realm Object Server, set createPostgresDB to true and initializeRealmFromPostgres to false to ease the setup of your Postgres server and minimize schema mismatch issues. If you are adding Realm Object Server onto your existing Postgres database, set createPostgresDB to false and initializeRealmFromPostgres to true for the same reasons.
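The two typical combinations look like this in config. The `exactlyOneDirection` sanity check is hypothetical (not part of the adapter), just a sketch of verifying that only one initialization direction is enabled.

```javascript
// Adding a Postgres server to an existing Realm Object Server deployment:
const addingPostgresToExistingRealm = {
  createPostgresDB: true,
  initializeRealmFromPostgres: false
};

// Adding Realm Object Server onto an existing Postgres database:
const addingRealmToExistingPostgres = {
  createPostgresDB: false,
  initializeRealmFromPostgres: true
};

// Hypothetical sanity check: exactly one direction should be enabled.
function exactlyOneDirection(cfg) {
  return cfg.createPostgresDB !== cfg.initializeRealmFromPostgres;
}

console.log(exactlyOneDirection(addingPostgresToExistingRealm)); // → true
```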

While your adapter runs (especially in development), it is possible that you may decide to change your Realm model schema.

applyRealmSchemaChangesToPostgres?: boolean

This option executes SQL commands to extend the Postgres schema any time the data model in the mapped Realm changes. It creates new tables or adds new columns when new fields are added to your Realm objects.
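As a rough illustration, a new Realm property implies a statement of the kind sketched below. The `addColumnSql` helper is hypothetical; the adapter generates its own SQL internally.

```javascript
// Hypothetical: the kind of SQL implied when a new property named 'color'
// appears on the Realm 'Bus' object.
function addColumnSql(tableName, columnName, pgType) {
  return `ALTER TABLE "${tableName}" ADD COLUMN "${columnName}" ${pgType};`;
}

console.log(addColumnSql('Bus', 'color', 'text'));
// → ALTER TABLE "Bus" ADD COLUMN "color" text;
```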

Custom Types


customPostgresTypes

This config option accepts a dictionary of key-value pairs in JSON format. Postgres supports the ability to create custom types; if you have these in your Postgres database and will be mapping them to Realm, you need to create a corresponding Realm type for each custom Postgres type. You can find the supported Realm types here.
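The dictionary shape looks like this (the entries are the examples from the adapter script earlier in this guide). The `resolvePostgresType` lookup helper is hypothetical, just to show how a custom type falls back to a built-in one.

```javascript
// Map of custom Postgres types to built-in Postgres types.
const customPostgresTypes = {
  'USER-DEFINED': 'text',
  'ARRAY': 'text',
  'mpaa_rating': 'text',
  'year': 'integer'
};

// Hypothetical lookup: custom types resolve through the map,
// built-in types pass through unchanged.
function resolvePostgresType(pgType) {
  return customPostgresTypes[pgType] || pgType;
}

console.log(resolvePostgresType('mpaa_rating')); // → text
console.log(resolvePostgresType('varchar'));     // → varchar
```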

Mapping the data


realmRegex

This option takes a regular expression string and tells the adapter which Realms to monitor for changes.

Let’s say the realm you were trying to monitor is at the path /myRealm.

You could use this in the config:

realmRegex: '^/myRealm$'

Or let’s say that each user is opening a realm at the path /~/myRealm.

The ~ here expands into a userID on the server, producing a path such as /fb80255953a5eba491671781778d3e91/myRealm. In this case there would be a realm created per user, and you could match all of them with:

realmRegex: '^/(.*?)/myRealm$'
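You can check the per-user pattern against candidate Realm paths directly in node:

```javascript
// The per-user pattern from the config, as a JavaScript RegExp.
const realmRegex = new RegExp('^/(.*?)/myRealm$');

// A per-user path matches; the bare path does not (it lacks the user segment).
console.log(realmRegex.test('/fb80255953a5eba491671781778d3e91/myRealm')); // → true
console.log(realmRegex.test('/myRealm'));                                  // → false

// The capture group extracts the userID portion of the path.
console.log('/abc123/myRealm'.match(realmRegex)[1]);                       // → abc123
```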

Mapping a Postgres Table to a Specific Realm


mapPostgresChangeToRealmPath

This option maps a table and its properties to the appropriate realm path. It can be either a string literal or a function that returns a string. If you are only replicating a single realm, you would pass in the same expression that you gave for realmRegex, for instance: '^/myRealm$'

However, in the case where each user opens their own instance of myRealm you will need to map the Postgres change to the correct realm user. You can do this by using a function like this:

mapPostgresChangeToRealmPath: (tableName, props) => `/${props.userID}/myRealm`
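The arrow function above in action, assuming the Postgres row's props carry a userID column (the table name and userID values here are illustrative):

```javascript
// Per-user mapping: derive the target Realm path from the row's userID.
const mapPostgresChangeToRealmPath = (tableName, props) => `/${props.userID}/myRealm`;

console.log(mapPostgresChangeToRealmPath('Bus', { userID: 'fb802559' }));
// → /fb802559/myRealm
```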

You can also go the opposite way by using this config parameter:


mapRealmChangeToPostgresTable

This option is less commonly used because a Realm object typically maps to a Postgres table, with each user's realm entries mapping to rows in the appropriate table. However, in case you do need to map Realm changes to specific tables based on custom logic, you can supply a function like so:

mapRealmChangeToPostgresTable?: (realmPath) => { tableName: string, extraProperties: any }

Given the realmPath that the change occurred on, you must return the tableName in Postgres that you want to write the change to, plus an extraProperties JavaScript object which allows mapping Realm data into additional Postgres columns. This is useful when a single Realm property needs to populate more than one column, such as assigning an ID in Realm to two columns in Postgres.
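A minimal sketch of such a function, assuming per-user paths of the form `/<userID>/myRealm` and a fixed target table; the table name and userID extraction here are illustrative assumptions.

```javascript
// Hypothetical reverse mapping: derive the Postgres table and extra columns
// from the Realm path the change occurred on.
function mapRealmChangeToPostgresTable(realmPath) {
  const userID = realmPath.split('/')[1]; // assumes '/<userID>/myRealm'
  return {
    tableName: 'Bus',
    extraProperties: { userID } // written as an extra column on each row
  };
}

console.log(mapRealmChangeToPostgresTable('/fb802559/myRealm'));
// → { tableName: 'Bus', extraProperties: { userID: 'fb802559' } }
```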

Mapping Postgres Columns to Realm Models

By default, the adapter will sync a Postgres table to a Realm model of the same name. However, configuration options are exposed in case you'd like to rename your Postgres tables to something more friendly for your app developers.

You can accomplish this with the following functions:

mapPostgresTableName: (table_name)
mapRealmClassName: (class_name)

where table_name is the name of your table in Postgres and class_name is the desired name of your class/model within the Realm Object Server.

For example:

Let's imagine that we have a table in Postgres called Driver which we'd like to rename to a Realm class called Chauffeur. The following configuration options would be used from your adapter script.

mapPostgresTableName: (table_name) => {
    if (table_name === 'Driver') {
        return 'Chauffeur';
    }
    return table_name;
},
mapRealmClassName: (class_name) => {
    if (class_name === 'Chauffeur') {
        return 'Driver';
    }
    return class_name;
},

You will also need to make sure your class name is defined correctly within your models definition. For example:

const Driver = {
    name: 'Chauffeur',
    primaryKey: 'id',
    properties: {
        Busvehicle: { type: 'Bus' },
        id: 'int',
        name: { type: 'string', optional: true },
        employ_date: { type: 'date', optional: true }
    }
};


Logging

The adapter uses winston for logging. It creates a default logger that logs to the console; you can control the default log level (info) using the logLevel parameter. For a list of available log levels, see the following page.

var adapter = new PostgresAdapter({
    // ...
    logLevel: 'verbose', // use LogLevels.VERBOSE for TypeScript
});

You can also provide a custom winston logger using the logger parameter.

var adapter = new PostgresAdapter({
    // ...
    logger: winston.createLogger({ /* ... */ }),
});

Alternatively, we provide another logger that outputs to a file instead of Console. This is helpful for verbose logging as sometimes the Console buffer can't keep up which causes the node process to crash (see issue).

const createFileLogger = require('realm-postgres-adapters').createFileLogger;
var adapter = new PostgresAdapter({
    // ...
    logger: createFileLogger(loglevel, filename),
});

Both the loglevel and filename arguments are optional. loglevel defaults to info if not specified, and filename (an absolute path) defaults to a file named realm-adapter-log-YYYY-MM-DD-MM-SS.txt created in the current directory where node was started.

For debugging purposes, it can be helpful to log the SQL commands. To activate this feature, set the log level to verbose and set printCommandsToConsole to true in the configuration.

Retrying Failed Queries

In case of fatal errors (PGRES_FATAL_ERROR), Postgres queries can be retried. The retry logic can be configured using two options: delayInMilliseconds and numberOfRetries.


postgres_config: {
    port: 5432,
    user: '<POSTGRES_USER>',
    password: '<POSTGRES_PASSWORD>',
    retryQuery: {
        numberOfRetries: 10, // default is 3
        delayInMilliseconds: 500 // default is 10 milliseconds
    }
}
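The behaviour these two options describe can be sketched as follows. The `withRetry` helper is hypothetical; the adapter's real retry implementation is internal to the package.

```javascript
// Hypothetical sketch: retry a failing query up to numberOfRetries times,
// waiting delayInMilliseconds between attempts.
async function withRetry(queryFn, { numberOfRetries = 3, delayInMilliseconds = 10 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= numberOfRetries; attempt++) {
    try {
      return await queryFn();
    } catch (err) {
      lastError = err;
      await new Promise(resolve => setTimeout(resolve, delayInMilliseconds));
    }
  }
  throw lastError;
}

// A simulated query that fails twice, then succeeds on the third attempt:
let attempts = 0;
withRetry(async () => {
  attempts++;
  if (attempts < 3) throw new Error('PGRES_FATAL_ERROR');
  return 'ok';
}, { numberOfRetries: 10, delayInMilliseconds: 5 }).then(result => {
  console.log(result, attempts); // → ok 3
});
```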

Configuration Options Quick Reference

Each entry below gives the option's description followed by its type and default setting.

Sets the value of time needed to pass before a connection timeout error is thrown.
Type/Default: String?: seconds

Indicates if Realms should be migrated to reference Realms when uploading to Realm Object Server.
Type/Default: Boolean: false

Indicates whether the adapter will create Postgres tables, columns, and constraints at start-up (before processing any Realm or Postgres changesets).
Type/Default: Boolean: true

resetPostgresReplicationSlot
Determines if the adapter will reset the replication slot on the Postgres server when the adapter first boots up.
Type/Default: Boolean: false

createPostgresDB
Creates a Postgres database if it does not already exist, based on the Realm object schema.
Type/Default: Boolean: false

initializeRealmFromPostgres
If set to true, the adapter will create realms for every connected Postgres instance.
Type/Default: Boolean: false

applyRealmSchemaChangesToPostgres
If set to true, the adapter will change the Postgres schema when the realm schema changes.
Type/Default: Boolean: false

mapPostgresChangeToRealmPath
This maps a table and its properties to the appropriate realm path.
Type/Default: String or Function (see example for details)

All of the default Postgres types are automatically mapped and converted to corresponding Realm types as shown here:

Realm type: string
Postgres types: text, varchar, character, character varying, tsvector, json, bytea, uuid

Realm type: double
Postgres types: numeric, decimal, double precision

Realm type: int
Postgres types: bigint, smallint, integer

Realm type: bool
Postgres types: boolean

Realm type: date
Postgres types: timestamp, timestamp without time zone, timestamp with time zone, date

Realm type: data
Postgres types: not supported currently

Troubleshooting

While running the adapter, it is possible that you may hit an error. We list a few of the more common ones below:

RangeError: Maximum call stack size exceeded

This is a generic node error which indicates that the process needs more available resources. Details on increasing the stack size can be found here.


# example shows 16MB made available
node --stack_size=16384 adapter.js
# if you're using PM2
pm2 start adapter.js --node-args="--stack_size=16384"

JavaScript heap out of memory

This is a generic node error which indicates that the process needs more available memory. This error can be thrown when working with very large databases. You can easily increase the memory available to the process like so:


# example shows 16GB made available
node --max_old_space_size=16384 adapter.js
# if you're using PM2
pm2 start adapter.js --node-args="--max_old_space_size=16384"