How to allow multiple Node-RED agents to talk to each other?

Hi @Allan_Zimmermann,
Thank you for your amazing work with OpenRPA & Openflow!

I think I have a good base of experience with docker, like creating my own containers or installing Nextcloud / Node-Red / N8N with docker-compose. But I don’t have much experience with traefik, which is why I’m hitting a wall.

Openflow is running here:
h.t.t.p://opc-x2/

From within Openflow I have added two Node-Red agents:
h.t.t.p://openrpa-nodered1.opc-x2
h.t.t.p://openrpa-nodered2.opc-x2

In both Node-Red instances I created a simple [HTTP IN] node handling a GET /hello request:
h.t.t.p://openrpa-nodered1.opc-x2/hello
h.t.t.p://openrpa-nodered2.opc-x2/hello
Both are accessible from my host machine. Everything works.

But when I try to access h.t.t.p://openrpa-nodered1.opc-x2/hello from within openrpa-nodered2 I get an error.

“RequestError: getaddrinfo ENOTFOUND openrpa-nodered1.opc-x2 : http://openrpa-nodered1.opc-x2/hello”

Well, I guess both don’t know about each other.
So how do I make them see and talk to each other?

The main goal of my question is to be able to call h.t.t.p://openrpa-nodered1.opc-x2/hello from another computer on my network, but first I need to fix the issue above.

h.t.t.p => Sorry, new users can only put 2 links in a post.

Thank you in advance!

My best guess is you did not use a proper DNS ( a public DNS, or an internal DNS server that both your machine and the machine with docker use ).
If you tried “cheating” by simply using the hosts file, that will not work.

docker runs an internal DNS server, so you can also access services by service name.
So from your example, openrpa-nodered1 can reach openrpa-nodered2,
but if openrpa-nodered1.opc-x2 is not registered in DNS, openrpa-nodered1 cannot reach openrpa-nodered2.opc-x2.
traefik allows loopback traffic, so my assumption here is that *.opc-x2 points to traefik’s external IP ( in this case the host machine ).
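A quick way to see this, as a sketch (container names taken from this thread; run on the docker host, and assuming the agent image ships `getent`, as most debian-based images do): docker's embedded DNS answers for service names only, so the `*.opc-x2` names must come from your own DNS server.

```shell
# Docker's embedded DNS (127.0.0.11 inside each container) only knows
# service/container names on the shared network; names like
# "openrpa-nodered1.opc-x2" are ordinary FQDNs that must be defined in
# whatever DNS the containers actually use.
docker exec openrpa-nodered2 getent hosts openrpa-nodered1
# should answer with the container's network IP
docker exec openrpa-nodered2 getent hosts openrpa-nodered1.opc-x2
# should fail with a lookup error unless your DNS defines opc-x2
```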

Thank you for your answer @Allan_Zimmermann!
But, it is still not working as expected.

Here is more information about the containers.

* Agent1
Name: openrpa-nodered1 (This is docker service name)
Slug: openrpa-nodered1
IP: 10.89.5.114
Public URL: openrpa-nodered1.opc-x2

* Agent2:
Name: openrpa-nodered2 (This is docker service name)
Slug: openrpa-nodered2
IP: 10.89.5.115
Public URL: openrpa-nodered2.opc-x2

As you mentioned, I removed the “.opc-x2” part of the URL so that the HTTP request uses the docker service name.
It worked!
Well, I don’t get “getaddrinfo ENOTFOUND” anymore, but another error instead.

The IP is Agent1’s IP, so the URL is pointing to the right container. This is good.
But the connection is refused on port 80.

# docker ps -a
IMAGE                                    STATUS                     PORTS                   NAMES
docker.io/library/mongo:latest           Up 6 minutes ago                                   openrpa-mongo
docker.io/library/traefik:latest         Up 6 minutes ago           0.0.0.0:80->80/tcp      openrpa-traefik
docker.io/library/rabbitmq:latest        Up 6 minutes ago                                   openrpa-rabbitmq
docker.io/library/mongo:latest           Exited (1) 6 minutes ago                           openrpa-mongosetup
docker.io/openiap/openflow:latest        Up 5 minutes ago           0.0.0.0:5858->5858/tcp  openrpa-openflow
docker.io/openiap/noderedagent:latest    Up About a minute ago                              openrpa-nodered1
docker.io/openiap/noderedagent:latest    Up About a minute ago                              openrpa-nodered2

That’s true: docker ps -a shows me agent1 and agent2, but no port 80 is being exposed for them.

Here is the docker-compose that I’m using.
I added “container_name” to all the services.
I changed the domain from “localhost.openiap.io” to “opc-x2”
I added the “networks” at the end of the file.

version: "3.3"
services:
  mongodb:
    container_name: openrpa-mongo
    image: docker.io/mongo
    restart: always
    command: "--bind_ip_all --replSet rs0"
    environment:
      - MONGO_REPLICA_SET_NAME=rs0
    volumes:
      - /opt/containers/openrpa/mongodb:/data/db

  mongosetup:
    container_name: openrpa-mongosetup
    image: docker.io/mongo
    depends_on:
      - mongodb
    restart: "no"
    command: >
      mongosh --host mongodb:27017 --eval
      '
      db = (new Mongo("mongodb:27017")).getDB("openflow");
      config = {
      "_id" : "rs0",
      "members" : [
        {
          "_id" : 0,
          "host" : "mongodb:27017"
        }
      ]
      };
      rs.initiate(config);
      '

  traefik:
    container_name: openrpa-traefik
    image: docker.io/traefik
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

  rabbitmq:
    container_name: openrpa-rabbitmq
    image: docker.io/rabbitmq
    restart: always

  api:
    container_name: openrpa-openflow
    image: docker.io/openiap/openflow
    labels:
      - traefik.enable=true
      - traefik.frontend.passHostHeader=true
      - traefik.http.routers.http-router.entrypoints=web
      - traefik.http.routers.http-router.rule=Host(`opc-x2`)
      - traefik.http.routers.http-router.service=http-service
      - traefik.http.services.http-service.loadbalancer.server.port=3000
      - traefik.http.routers.grpc-router.rule=Host(`grpc.opc-x2`)
      - traefik.http.routers.grpc-router.service=grpc-service
      - traefik.http.routers.grpc-router.entrypoints=web
      - traefik.http.services.grpc-service.loadbalancer.server.port=50051
      - traefik.http.services.grpc-service.loadbalancer.server.scheme=h2c
    ports:
      - "5858:5858"
    deploy:
      replicas: 1
    pull_policy: always
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    depends_on:
      - rabbitmq
      - mongodb
    environment:
      - auto_create_users=true
      - auto_create_domains=
      - websocket_package_size=25000
      - websocket_max_package_count=1048576
      - protocol=http
      - port=3000
      - domain=opc-x2
      - log_with_colors=false

      # uncomment below 2 lines, if you have set replicas above 1
      # - enable_openflow_amqp=true
      # - amqp_prefetch=25
      # uncomment to add agents to the same docker compose project ( will break running docker compose up -d if any agents are running )
      # - agent_docker_use_project=true

      - agent_oidc_userinfo_endpoint=http://api:3000/oidc/me
      - agent_oidc_issuer=http://opc-x2/oidc
      - agent_oidc_authorization_endpoint=http://opc-x2/oidc/auth
      - agent_oidc_token_endpoint=http://api:3000/oidc/token

      - amqp_url=amqp://guest:guest@rabbitmq
      - mongodb_url=mongodb://mongodb:27017/?replicaSet=rs0
      - mongodb_db=openflow

      - aes_secret=O1itlrmA47WzxPj95YHD2sZs7IchYaQI25mQ

networks:
  default:
    name: openrpa-net
    driver: bridge

Questions:
Q1. I think I need to expose port 80, but how? The only interaction I have with Agent1 and Agent2 is from OpenFlow.

Many services don’t use port 80 ( but you see port 80 or 443 because the traffic goes through traefik ).
node-red for instance uses port 3000 ( and so do the api nodes in openflow )
( that is what the - traefik.http.services.http-service.loadbalancer.server.port=3000 label is telling traefik ).
So if you want to “hit” the nodered nodes internally and not through traefik, you need to use the service name and port 3000.
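In other words (a sketch, using the names and port from this thread): the internal URL is built from the docker service name plus port 3000, while the `.opc-x2` form only works where that name resolves to the traefik host.

```shell
# Internal, container-to-container URL: docker service name + the port
# Node-RED actually listens on (3000). Nothing in the agent container
# listens on port 80; that mapping only exists inside traefik.
slug="openrpa-nodered1"    # the agent's docker service name (its "Slug")
port=3000                  # Node-RED agent's listen port
internal_url="http://${slug}:${port}/hello"
echo "$internal_url"       # http://openrpa-nodered1:3000/hello
```

From inside agent2 (for example in its http request node, or with a quick `curl` if the image ships it) that URL should reach agent1's /hello endpoint directly.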

@Allan_Zimmermann Thank you for your answer.

That worked!
Using docker service name and port 3000 did the trick!

So this is how Openflow does things! I learned a lot, thank you.
Is there a page somewhere describing how things work under the hood of Openflow?
It is very interesting!

The above is docker specific, so if you want to learn more about that, I can highly recommend watching some YouTube videos about docker/kubernetes and reverse proxies ( like nginx and traefik ).
There is not a lot, but there are a few topics covered about openflow at https://openflow.openiap.io/ ( like the Architecture page ).

@Allan_Zimmermann Thank you!

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.