Backup Server for OpenRPA

Hi @Allan_Zimmermann,

  1. I have installed an on-prem instance of OpenFlow on a Linux server (Server A). I would like to know how to keep a backup server (Server B) that takes over immediately if Server A goes down.

  2. Would it be possible to configure the OpenFlow instance (docker-compose.yml) installed on the Linux server (Server A) to use a MongoDB installed on a Windows server (Server B)?

It would be great if you could share your thoughts on this.

Thanks in advance.

OpenFlow is Designed to be Scalable

Replicate MongoDB Only


  • On server 1, install MongoDB, RabbitMQ, and OpenFlow.
  • On server 2, install MongoDB.
  • Make server 2 a part of a MongoDB replica set.

Note: MongoDB needs an odd number of voting members in a replica set (1, 3, 5, …) to be able to elect a primary, so you need to install MongoDB one more time on one of those servers. Since server 1 is busy, it would make the most sense to do that on server 2. You could use Docker on server 1 and manually add the two MongoDB instances on server 2 by updating the init container in the Docker Compose file.


  • This could be a way to protect your data, but anything you do to one of the databases is replicated to all of them. So this basically only saves your data in case one of the servers goes down; you now store all data three times, but an accidental deletion still hits every copy. You should still make backups often in this type of setup.
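As a rough sketch of how the three-member replica set above could be initiated (hostnames, ports, data paths, and the set name `rs0` are placeholders I picked for illustration, not values from the OpenFlow docs):

```shell
# Sketch only -- hostnames, ports, and paths are assumptions; adjust to your servers.

# On server 2, run two mongod instances on different ports:
mongod --replSet rs0 --port 27017 --dbpath /data/rs-a --bind_ip_all
mongod --replSet rs0 --port 27018 --dbpath /data/rs-b --bind_ip_all

# On server 1 (which also runs mongod --replSet rs0), initiate the set:
mongosh --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1:27017" },
    { _id: 1, host: "server2:27017" },
    { _id: 2, host: "server2:27018" }
  ]
})'
```

The MongoDB connection string in your docker-compose.yml would then need to list all three members and the `replicaSet` parameter, e.g. `mongodb://server1:27017,server2:27017,server2:27018/?replicaSet=rs0`.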

Replicate MongoDB and RabbitMQ


  • On server 1, install MongoDB, RabbitMQ, and OpenFlow.
  • On server 2, install MongoDB twice and RabbitMQ.

Note: Setting up a RabbitMQ cluster this way should be possible, but running one node in Docker and one directly on a physical server is not something I have tried before. As long as you use a configuration that allows running RabbitMQ stateless, you should be okay.


  • One of the reasons you would replicate things is for scalability, so you can add more servers and get better performance under heavy load. With MongoDB, you can keep installing as many servers as you want. You can add one or more replica sets and then build shards on top of those. With RabbitMQ, you can either set up queues on different hosts or replicate everything between all nodes. This setup will be much more robust if one of the servers fails, but you still have a single point of failure in OpenFlow. Solving that becomes trickier.
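For the RabbitMQ side, joining the node on server 2 to the one on server 1 could look roughly like this (node and host names are placeholders, and both machines must share the same Erlang cookie):

```shell
# Sketch only -- node names are assumptions. Both nodes must share the
# same Erlang cookie (e.g. /var/lib/rabbitmq/.erlang.cookie).

# On server 2, join the node running on server 1:
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@server1
rabbitmqctl start_app

# Optionally mirror all queues across the cluster (classic mirrored queues):
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'
```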

Replicate MongoDB, RabbitMQ, and OpenFlow


  • On server 1, install MongoDB, RabbitMQ, and OpenFlow.
  • On server 2, install MongoDB, RabbitMQ, and OpenFlow.
  • Externally or with software on each server, make sure you can load balance traffic between servers.

Note: I have only done this with Cisco (hardware), the Windows Network Load Balancing service, and F5. If I had to “pick a poison”: Windows load balancing is the easiest to set up and free with the right OS license, Cisco is the easiest to maintain but the most expensive, and F5 is a middle ground.


  • If you can find a load balancer (hardware or software), then you could install OpenFlow on both servers and load balance the traffic between them. This way, if one of the servers goes down, the other will handle the traffic without downtime. Still, you need to back up the database often to avoid data loss. This is an almost perfect setup but can be hard to set up and maintain if you don’t have expertise in running these types of setups.
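If you go the software route, a minimal HAProxy configuration is one option (server names and IPs below are placeholders; this assumes OpenFlow serves HTTP/WebSocket traffic on port 80 on both servers):

```
# Sketch only -- IPs, ports, and the health-check path are assumptions.
frontend openflow_in
    bind *:80
    default_backend openflow_nodes

backend openflow_nodes
    balance roundrobin
    option httpchk GET /
    server serverA 192.168.1.10:80 check
    server serverB 192.168.1.11:80 check
```

HAProxy in HTTP mode passes WebSocket upgrades through by default, which matters since OpenFlow clients hold long-lived socket connections.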

Docker Swarm Mode


  • On server 1, install Docker.
  • On server 2, install Docker.
  • Set up Docker Swarm Mode on both servers.
  • Update the Docker Compose file to set a replica count of 2 for OpenFlow.
  • Update the Docker Compose file with a MongoDB replica set install (I have no idea how to do that). Everything in that process is easy except MongoDB decided to force certificate authentication on replica sets with MongoDB 6 and up. So, all guides only work with MongoDB 5. If that’s okay with you, do that. I have made a “sidecar” for MongoDB that can handle this with any version of MongoDB, but it is only supported on Kubernetes and is not open source, so it requires an OpenFlow premium license.

Note: There are guides on how to make a scaled-out version of RabbitMQ on Swarm Mode. Remember you need a stateless one since you don’t have shared storage between the servers.


  • If you can make that work, you now have a completely scaled-out setup. Unless you have tried something similar before, I think it would be easier to just install Portainer on the cluster, deploy the normal Docker Compose file, and then increase the replica count of the API nodes. Then deploy two more MongoDB instances to the cluster and follow a guide on how to add the two independent nodes to the replica set. If you can, you should also consider using something that allows replicating volumes between servers, like LINBIT SDS.
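The Swarm steps above could look roughly like this on the command line (the stack name and the API service name are guesses based on a stock docker-compose.yml, so check yours):

```shell
# Sketch only -- service and stack names are assumptions.

# On server 1, initialize the swarm:
docker swarm init --advertise-addr <server1-ip>
# On server 2, run the "docker swarm join ..." command printed by the step above.

# Back on server 1, deploy the stack and scale the OpenFlow API service:
docker stack deploy -c docker-compose.yml openflow
docker service scale openflow_api=2
```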



Kubernetes

  • On server 1, install Kubernetes.
  • On server 2, install Kubernetes.

You can follow the official guide, and then you need to decide which storage provider you want to use for persistent volumes. You also need to deploy an ingress controller such as Traefik. If you want some more “hand-holding” and a nice web UI to manage the cluster, you could also use Rancher. The nice thing about Rancher is that it comes with Traefik as an ingress controller by default, so you don’t have to deploy that yourself.

Note: This gives you the most flexible and scalable setup. It’s easy to extend to be used with many other services you might need. This is also the only way to get the full benefits of OpenFlow with a premium license (enhanced monitoring tools, enhanced orchestration of workloads, etc.). You can deploy OpenFlow using our Helm chart that handles deploying everything in a scalable manner. This will also give you the option to use snapshots for backup, which is important when your database becomes big since MongoDB + OpenFlow gets affected while a backup dump is running. With snapshots, there is zero impact on the database. This will require you to choose a storage provider in Kubernetes that also supports snapshots, like Ceph or Longhorn. OpenFlow can also be deployed on Docker installations, but unless you have a very beefy server, you would normally only do that to get access to Grafana and light monitoring. OpenFlow requires a premium license to be deployed on Kubernetes.
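A deployment with the Helm chart mentioned above might look roughly like this (the repository URL and chart name are placeholders; use the ones from the official OpenFlow documentation):

```shell
# Sketch only -- repo URL and chart name are placeholders, not the real ones.
helm repo add openflow https://example.com/charts
helm repo update
helm install openflow openflow/openflow \
  --namespace openflow --create-namespace \
  -f values.yaml   # your domain, storage class, replica counts, etc.
```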

