Directory/folder structure for Red Hat Enterprise Linux

Hey,

We’re currently working on implementing OpenRPA in our company. We haven’t decided yet whether we’ll choose OpenRPA or another RPA software, so we want to test OpenRPA on a test server. The problem is that our company is a little overprotective with its security measures: when requesting a new server, we need to show the exact folder/directory structure of the software and which component is stored where, and then specify how much disk space each location needs. We would like to try the whole setup (OpenFlow, Node-RED, and OpenRPA) on a Red Hat Enterprise Linux server with Docker and Kubernetes. Does anyone know exactly what the folder/directory structure will look like after installation? Otherwise it will be hard to get the software installed, because our /opt and /var partitions are very small without dedicated sub-directories.

My second question is: how much CPU and RAM do we need? I found some information in the requirements section, but it will scale up depending on how much the system is used. Do you have any examples of how much CPU and RAM it needs after a few years of usage?

Thank you very much in advance!

Hey

  1. OpenRPA is a Windows application; you cannot install it on Linux.
    If you only need to do browser automation, you can use other libraries from within agents running under OpenFlow, such as Robot Framework, Beautiful Soup, Puppeteer, etc. (see the first sketch after this list).
  2. The whole point of Docker is that you do not need to know or worry about those things, since the applications are containerized. All your IT people need to decide is where to store the images; by default Docker uses /var/lib/docker (the second sketch after this list shows how to move that).
  3. I’ve worked with two different massive enterprises that insisted on using Red Hat, and neither of them ever managed to get Docker to work on it. At one of them we ended up using a Windows server (that was painful); at the other, we deployed on OpenShift instead (OpenShift is Red Hat’s attempt at making Kubernetes more secure).
    Keep in mind that OpenFlow requires a license to run on Kubernetes/OpenShift. Feel free to send me an email if you need a demo license to test it out first.
  4. The resource requirements are impossible to predict, as they depend on how you use the system. For OpenFlow I wrote a little about it here; for OpenRPA, it’s basically 400 MB of RAM to “start it”, and each workflow you create/load, plus any data those workflows consume, adds on top of that (see the sizing sketch after this list).
    If you decide to use agents for browser automation, keep in mind that Chrome is very memory hungry; I normally allocate at least 1 gigabyte of memory per Chrome-enabled agent.
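To make point 1 concrete, here is a minimal Python sketch of the kind of scraping an agent script could do with Beautiful Soup. The URL and the CSS class are made-up examples, and it assumes the `requests` and `beautifulsoup4` packages are installed; treat it as an illustration, not an OpenFlow API.

```python
# Minimal sketch: scrape a page the way an agent script might.
# The URL and the CSS class below are hypothetical examples.
import requests
from bs4 import BeautifulSoup

def fetch_invoice_numbers(url: str) -> list[str]:
    """Download a page and extract text from a (hypothetical) CSS class."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # "invoice-number" is an example class name, not something OpenFlow defines.
    return [el.get_text(strip=True) for el in soup.select(".invoice-number")]

if __name__ == "__main__":
    print(fetch_invoice_numbers("https://example.com/invoices"))
```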
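On point 2: if /var is too small, Docker lets you move its storage with the `data-root` option in `/etc/docker/daemon.json`. Here is a small Python sketch that writes that setting; the target path `/data/docker` is just an example mount point, and it assumes you run it as root and restart Docker afterwards (existing images would also need to be migrated).

```python
# Sketch: point Docker's storage at a bigger partition instead of /var/lib/docker.
# Run as root; /data/docker is an example path, use whatever mount IT approves.
import json
import pathlib

DAEMON_JSON = pathlib.Path("/etc/docker/daemon.json")
NEW_DATA_ROOT = "/data/docker"  # example mount point, not a fixed requirement

def relocate_docker_data_root() -> None:
    config = {}
    if DAEMON_JSON.exists():
        config = json.loads(DAEMON_JSON.read_text() or "{}")
    config["data-root"] = NEW_DATA_ROOT  # the actual Docker daemon option
    pathlib.Path(NEW_DATA_ROOT).mkdir(parents=True, exist_ok=True)
    DAEMON_JSON.parent.mkdir(parents=True, exist_ok=True)
    DAEMON_JSON.write_text(json.dumps(config, indent=2) + "\n")
    # Afterwards: systemctl restart docker (and migrate any existing images).

if __name__ == "__main__":
    relocate_docker_data_root()
```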
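And for point 4, a back-of-the-envelope sizing helper using the numbers above (400 MB base, roughly 1 GB per Chrome-enabled agent). The per-workflow cost is a pure placeholder assumption, since that figure depends entirely on what your workflows do; measure your own once you have something running.

```python
# Rough RAM estimate based on the figures in this reply:
# ~400 MB base for OpenRPA, ~1 GB per Chrome-enabled agent.
# PER_WORKFLOW_MB is a PLACEHOLDER assumption; measure your real workflows.

BASE_MB = 400            # OpenRPA "just started" footprint (from above)
CHROME_AGENT_MB = 1024   # rule of thumb per Chrome-enabled agent (from above)
PER_WORKFLOW_MB = 50     # hypothetical average, depends on your workflows

def estimate_ram_mb(workflows: int, chrome_agents: int) -> int:
    """Rough planning aid in MB, not a guarantee."""
    return BASE_MB + workflows * PER_WORKFLOW_MB + chrome_agents * CHROME_AGENT_MB

# Example: 20 loaded workflows and 2 Chrome agents -> 400 + 1000 + 2048 = 3448 MB.
print(estimate_ram_mb(workflows=20, chrome_agents=2), "MB")
```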

I never expected to get an answer that fast. Thank you!

  1. We use Windows 11 on our clients, so this shouldn’t be a problem.
  2/3. I wouldn’t say we are forced to use Red Hat. If you say it works better on a Windows Server, that is certainly possible! Does it need a specific Windows Server version, or would you recommend one that it runs better on than others?
  4. Thank you! We’d better scale it a little higher for testing and see how much we need when we try automating something a little more complex.

Hey

  1. Good, but keep in mind that agents are also an option.

For a single-server install, i.e. Docker:

No, I would NOT recommend Windows Server.
Ubuntu/Debian/NixOS/Fedora etc. … anything else is better, as long as you can get Docker running on it :smiley: … I can help if it’s an OS I know, but if you prefer Red Hat/SUSE etc., you will need to get help from them on how to make Docker work. Once Docker is running, getting everything up and running is smooth. (guide here)

For a scalable install:

I’ve used Kubernetes on Ubuntu for on-prem installs, I’ve also seen some use Rancher on Windows, and I have some that use OpenShift on Red Hat. I also know one (maybe two?) that is running some pretty big production installs using Docker in Swarm mode. But most companies I work with deploy it on Kubernetes in Azure or Google Cloud. I did an AWS install a few times, and once in Alibaba Cloud too, but let’s be honest, Kubernetes as a service generally just works, no matter who is offering it :slight_smile:
3. I’m a heavy advocate of scaling out; I like many small servers over one or two big ones. That is why it’s important for me to be able to run any part (or the whole stack, when testing) on a machine with 4 gigabytes of memory.
So once it’s time to scale, you want Kubernetes. I can assist with some types of installs, but either way, once Kubernetes and Traefik are installed, it’s super easy to deploy any combination of components using Helm (rough sketch below).
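To give a feel for that last step, here is a hedged sketch of driving such a deploy from a script (kept in Python to match the other sketches in this thread). The Helm subcommands and flags are real, but the repository URL, chart name, and release name are placeholders; take the real ones from the official guide.

```python
# Sketch: drive a Helm deploy from a script. The repo URL, chart name and
# release name are PLACEHOLDERS; take the real ones from the official guide.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["helm", "repo", "add", "myrepo", "https://charts.example.com"])  # placeholder repo
run(["helm", "repo", "update"])
run(["helm", "upgrade", "--install", "openflow", "myrepo/openflow",   # placeholder chart
     "--namespace", "openflow", "--create-namespace"])
```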

I don’t understand why Microsoft’s server OS has been giving me so much trouble; getting Docker to work with WSL on Windows 10/11 is super smooth and works very well (same on macOS). So personally I will not support it, but if your IT people can make Docker work in Linux mode, you can make all the rest work without any issues. If I had to use Windows, I would probably try to follow a guide for running minikube or Rancher instead.
