Execute package from within OpenRPA

Hi everyone,

I like the idea of using OpenRPA as a general workflow setup, as it gives easy access to useful RPA tools and a nice overview of the process.
On the other hand, I also like the advantages of executing packages in OpenFlow, as that supports some awesome tools.

Let’s say I have the following setup:
A - Machine with OpenFlow
B - Machine with OpenRPA

What I then want is to run an OpenRPA flow on ‘B’, triggered by a workitem queue that has that specific workflow associated with it.

This is perfect, but additionally I want a way to activate an agent that lets me execute a Python program on the same machine ‘B’, which I have packed and published using the openiap API in VS Code.

The only way I can think of to achieve this is to create an agent on that specific machine, add the agent and the package to the workitem queue config, push a workitem to that queue from inside the OpenRPA flow, and wait for it to finish processing before continuing in the OpenRPA flow. But this raises an issue: the agent cannot run before the flow has terminated, because the machine is “busy”, essentially deadlocking itself.

So my question is:
Does anyone know of a way to achieve this, or whether it is even possible? It could also be done from within NodeRED using the Invoke OpenFlow activity.

If you have published the Python code as a package to OpenFlow, then you can run it in an agent (either a Docker image or a NodeAgent installed as a service or in the assistant running on a user’s desktop).

From your description, it sounds like you are asking to run the Python package from inside OpenRPA. If that is the case, that is not how it was intended to work, but I guess you could do that. If not, then I hope the below answers your question and use case.

I’m unsure about the exact use case, so let me cover a few possibilities.

Do you want to run the two in sequence?

In that case:
Create two work item queues, one for running the OpenRPA workflow per work item and one for running the package on the agent (same machine in this case). You can then configure the first queue to “push” successful work items to the second queue once completed.

Do you just want to be able to run both at the same time?

In that case, nothing is stopping you. OpenRPA does not know about the NodeAgent on the same machine, so you can run an OpenRPA workflow at the same time as you run a Python script. (But if OpenRPA and the Python script are both trying to do RPA, they will conflict; in that case, you should consider the approach above.)

Do you want to run two workflows in parallel on OpenRPA?

You have a few options.

  • You can use invoke code twice, without waiting for completion, to allow both to run, if triggered from a local workflow.
  • You can mark one of the workflows as a background workflow; this will make OpenRPA ignore it when trying to start a different workflow (but not the other way around).
  • You can update settings.json to allow more than one workflow to be run remotely (see the example below this list).
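
For reference, a minimal settings.json fragment for that last option. The key names here are an assumption from memory and may differ between OpenRPA versions, so verify them against your own settings.json before relying on this:

{
  // assumed key names; check your own settings.json before using
  "remote_allow_multiple_running": true,
  "remote_allow_multiple_running_max": 2
}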

Thank you for your detailed response, Allan.

I would like the execution to happen sequentially. Here’s an example of my OpenRPA workflow:

The use case is to execute Python code triggered from OpenRPA.

As I’m new to agents, I’m not fully aware of their possibilities and limitations. From my understanding, a workitemqueue can trigger a package using an agent, but this configuration in OpenFlow seems static.

What I’m looking for is a more flexible setup, where a queue could dynamically run Python code on dynamically selected machines. However, if the Python workitem must execute on the same machine, it might deadlock itself at Step 2 in the workflow above.

My goal is to directly run the Python package on the same machine triggered by OpenRPA, as simply as possible.

I hope this makes sense. If you can think of a different approach, I’d love to hear your thoughts.

There is no direct way to activate a package on an agent from OpenRPA.
But you could use Invoke OpenFlow (this calls a “workflow in” node in NodeRED), and inside that flow you can send a message to the agent.

But that feels “wrong”…
So, looking at your example workflows, it seems better to use three work item queues:

  • The first queue to run OpenRPA stuff
  • The second queue to trigger a package on an agent
  • The third queue to do more OpenRPA stuff.

Thank you for the suggestion.

I feel that having a separate queue for each piece of Python code is a bit unnecessary. I’d prefer to work with a single workitem to keep things simpler, though I understand this might introduce challenges, especially with error handling, such as ensuring the workflow knows whether Step 2 was successful before continuing to Step 3.

Let’s say the machine’s state must remain unchanged before Step 2 starts. How would you ensure the agent is triggered immediately after Step 1 in that case?

Also, I’m curious about the Invoke OpenFlow method you mentioned. Could you elaborate on how you would approach this in detail?

You need to think in transactions, or you will end up spending absurd amounts of time troubleshooting when things go wrong.

We have 2 types of queues:

  • Message Queues
  • Workitem Queues

Message Queues are stateless, and messages expire rather quickly. They are meant as a fast and easy way to send a message to someone who is waiting, and to notify you if they are not responding. If a message is dropped, it is your responsibility to “catch” that and handle it.
Workitem Queues are “stateful”. OpenFlow uses ACID transactions against the database to ensure consistency when agents request workitems. They can handle basic retry logic and support both simple and very complex routing rules between queues. A process typically consists of one or more workitem queues; we add a queue whenever we need to span domains (programming languages or computers) or when rollback in case of an error is more safely handled in chunks.

For example:
Remote Invoke OpenRPA and Invoke OpenFlow in OpenRPA use message queues.
RPA, AMQP consumer, AMQP publisher, AMQP exchange, and AMQP ack in NodeRED use message queues.

You should only use these if you either don’t care about, or know how to handle, a message not getting through or a receiver not sending a response back (in case something goes wrong).
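
On the workitem queue side, to make the “stateful” part concrete, here is a minimal sketch of an agent package draining a queue with the openiap Node.js SDK. The queue name “pythonqueue” is made up, and the exact package name, method names, and workitem fields (PopWorkitem, UpdateWorkitem, state, errormessage) should be checked against your SDK version:

const { openiap } = require("@openiap/nodeapi");

async function main() {
  const client = new openiap();
  await client.connect();
  while (true) {
    // pop the next workitem from the (hypothetical) "pythonqueue" workitem queue
    const workitem = await client.PopWorkitem({ wiq: "pythonqueue" });
    if (workitem == null) break; // queue is empty
    try {
      // ... do the actual work based on workitem.payload ...
      workitem.state = "successful";
    } catch (error) {
      // "retry" lets OpenFlow's retry logic re-queue the item
      workitem.state = "retry";
      workitem.errormessage = error.message || String(error);
    }
    await client.UpdateWorkitem({ workitem });
  }
}
main();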

Back to your question about Invoke OpenFlow: if you add a “workflow in” and a “workflow out” node in NodeRED and connect them, then check “RPA” on the “workflow in” node, you can select this “node” inside OpenRPA. This will send the payload to the node, and OpenRPA will wait until it receives a response (when the flow hits the “workflow out” node). If OpenRPA disconnects, if NodeRED restarts, or anything else goes wrong, OpenRPA will wait forever. Therefore, you should only use this for lightweight work, like calling an API, not for running code, doing long-running tasks, or “chaining” it with other messages.

That makes sense, but in the worst case you should still be able to wrap the Invoke OpenFlow in some time-based error catching, like a Pick Branch containing the invoke and a Throw Business Exception, like this:

Additionally, how would I go about sending a message to the agent from NodeRED?

Alternatively, the generic logic could just be moved into the package itself, with some kind of switch (if) deciding which of many Python routines it is supposed to run, based on information in the workitem’s payload (roughly sketched below).
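
A rough illustration of that idea, written in JavaScript to match the other examples in this thread (the same pattern applies to a Python package); the “task” field and the handler names are made up:

// hypothetical dispatcher inside one generic package: the workitem payload
// carries a "task" field that selects which routine to run
const handlers = {
  invoices: async (payload) => { /* invoice-specific logic */ },
  reports: async (payload) => { /* report-specific logic */ },
};

async function run(payload) {
  const handler = handlers[payload.task];
  if (handler == null) {
    throw new Error("Unknown task: " + payload.task);
  }
  return await handler(payload);
}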

A pick branch could be part of the solution.
The problem is, I never added a “timeout” on Invoke OpenFlow and Remote Invoke OpenRPA (there is a timeout property, but that is the timeout value on the message queue; if the message is received but you never get a reply, it will still hang forever).

Also, a pick branch could be dangerous depending on what you are actually doing. If you trigger something remotely, wait 2 minutes, and then throw a Business Rule Exception, the remote code is most likely still running, and you risk ending up in an inconsistent state. I would still recommend proper state management and using work item queues in a case like this.

It will work most of the time; I’m just pointing out the pitfalls.

Thank you, Allan. I really appreciate your insights.

I’ll likely go with creating a generic package that can be extended to support different Python code, all managed under the same workitem queue dedicated to Python. This seems like the safest and cleanest approach.

That said, out of curiosity, I’d still appreciate it if you could explain how it’s possible to invoke an agent from within NodeRED. Specifically, is it possible to pass information, such as the workitem ID, while invoking an agent from NodeRED? This way, I’d still be able to manage the same workitem seamlessly.

I have not “locked in” the message format for agent commands yet, so consider the text below a work in progress that could potentially change at some point (but most likely it will not, since it’s used in soooo many places already).
Also, remember that this is using the message queue, so I still believe you should use work item queues for running work on agents.

All agents listen on a queue with a name that consists of the agent’s slug + “agent”.
So, if you have an agent with the slug “pythonrunner”, you send a message to “pythonrunneragent”.

Every message should contain a property called “command”, and in this case the command is “runpackage”.
runpackage accepts the following properties (arguments):

  • id: _id of the package to run; this is mandatory.
  • queuename: queue to stream console output and results to. This is optional.
  • stream: true/false; this must be true if queuename is set, else false.
  • payload: an optional payload to “feed” into the package. This must be an object or a JSON string. If this is given, it will save the payload to a randomly generated filename and set an environment variable called payloadfile with the full path and filename. You can update this file during the run if needed, and the content will be sent back to the queue set in queuename.

If you set queuename, you will receive multiple messages back, but you should look for one with command=runpackage and completed=true. If success=true, you can get the console output in output and the file content in payload. If success=false, you will get the console output in output and the error message in error.

So, if you want to do this using the SDK from an agent or code you are running locally, like in VS Code, you first register a queue for receiving status commands:

const queuename = await client.RegisterQueue(
  {
    queuename: "" // empty string to use a temporary queue
  },
  (msg, payload, user, jwt) => {
    if (payload.command == "runpackage" && payload.completed == true) {
      // the run has finished: check payload.success, then payload.output
      // and payload.payload (or payload.error) for the result
    }
  });

and you trigger a package using:

const payload = {"message": "hi mom"}
await client.QueueMessage(
  {
    data: {
      command: "runpackage",
      id: packageId,
      stream: true,
      payload: payload,
      queuename: queuename,
    },
    queuename: "pythonrunneragent",
  },
  false,
);

The agent will send multiple messages; most notably, it will send this message if the package was successfully started:

{ 
  "command": "runpackage", 
  "success": true, 
  "completed": false 
}

Or you will get this message if it failed to run the package:

{ 
  "command": "runpackage", 
  "success": false, 
  "completed": true, 
  "output": output, 
  "error": "error message" 
}

Once the package has exited, you will receive a message in this format:

{ 
  "command": "runpackage", 
  "success": true/false, 
  "completed": true, 
  "exitcode":number, 
  "output": "console output", 
  "payload": "payload if used" 
}
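
The “payload if used” above is whatever the package wrote back to the payload file. Here is a minimal sketch of the receiving side, assuming a Node.js package (the same works from Python via os.environ); the “result” field is just an example:

const fs = require("fs");

// the agent writes the incoming payload to a temp file and exposes its path
// in the "payloadfile" environment variable
const payloadfile = process.env.payloadfile;
let payload = {};
if (payloadfile != null && fs.existsSync(payloadfile)) {
  payload = JSON.parse(fs.readFileSync(payloadfile, "utf8"));
}
console.log("received payload:", JSON.stringify(payload));

// anything written back to the same file is returned on the queue set in
// "queuename" once the package exits
payload.result = "done";
if (payloadfile != null) {
  fs.writeFileSync(payloadfile, JSON.stringify(payload));
}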

You can do the same in Node-RED: use the “amqp consumer” node to listen on a queue (this will be a named queue, not a temp queue as in my SDK example, so after the queue has been registered you MUST update its permissions inside the mq collection to allow the agent to publish to it), and you can publish a message to the agent using the “amqp publisher” node.
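
As a rough sketch, a function node wired into the “amqp publisher” node (with the node’s target queue configured as “pythonrunneragent”) could build the message like this; the package _id, the reply queue name “pythonrunnerreply”, and the workitemid field are all made-up examples:

// build the runpackage command for the agent; the values below are placeholders
msg.payload = {
  command: "runpackage",
  id: "65a1b2c3d4e5f6a7b8c9d0e1",        // _id of the published package
  stream: true,
  queuename: "pythonrunnerreply",        // named queue an "amqp consumer" node listens on
  payload: { workitemid: msg.workitemid } // e.g. pass the workitem _id along to the package
};
return msg;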


Thank you so much, Allan!

I really appreciate the detailed explanation and example. This gives me a much clearer understanding of how to approach the agent communication. I’ll take the time to go through this thoroughly and try to implement something similar on my end.

Thanks again for your patience and guidance!


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.