Agents automating Web applications

Hi everyone,
I have started playing with Agents, and I really like them.
I have tried using an Agent for Robocorp bots, that is, automating some web application. I know there is an agent type, something like node-chromium, for browser automation in OpenFlow.

Question:
Can this kind of Agent automate web applications in the background without interrupting my work? Or do I need to run the package with my computer's Agent (meaning the Assistant app developed by Allan) in order to automate a browser application?

Packages can run in one of:

  • docker ( docker or kubernetes or openshift )
  • nodeagent ( a background service running on either windows/macos/linux )
  • assistant ( runs inside the desktop of a user on either windows/macos/linux )

The assistant runs as the user at the desktop; the other two are for unattended work. So to answer your question, yes and no … If you don't want to get interrupted, you should run the package on a nodeagent or inside docker.

Just to clarify: packages are code you write, and the nodeagent package finds the correct runtime and runs them for you on request. If you need access to a browser, you can set chromium = true in package.json; this will only allow running the package where chromium (or chrome) has been found.
This is why there are two docker images … one with chromium and the default without.
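To make that concrete, a minimal package.json for a browser package might look like the sketch below. Only the `chromium` flag is taken from the explanation above; the other fields are ordinary npm metadata, and the exact placement of the flag should be verified against the official openiap examples.

```json
{
  "name": "my-browser-bot",
  "version": "0.0.1",
  "main": "index.js",
  "chromium": true
}
```

With `chromium: true`, the agent should only run the package on hosts where a chromium/chrome install was found.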

It's funny you should mention this, I was just going through all the examples and docker files and found there was an issue with robotframework crashing chrome, so I'm just about to upload a fix for that.
You can see the robotframework example here. And if you prefer robocorp/rcc, you can see an example here.
I would not recommend using the rcc version unless you need to run workflows that only work with rcc … it uses conda, which is PAINFULLY slow compared to micromamba, which I use for everything else.


Hmm, so while testing the rcc part, I see they are NOT using conda anymore, or they are simply detecting that it exists and using that instead.
Sorry, my bad.
I wonder why it's still slow then …


Yes, I have run it from OpenFlow with that agent (the node and chromium agent), and I suppose it runs inside Docker. So, as I understand it, Agents inside Docker can do web automation?

Nodeagent is started from the command line (explained here: GitHub - openiap/nodeagent).
One question regarding this: as I understood it, I would need an OpenFlow instance (to connect to) in order to run it as a NodeAgent (explained here: Agent Quick Start Guide | OpenIAP Documentation)? (Packages are stored by OpenFlow, and this is why agents need to connect to it?)

Regarding rcc, I also ran into an error while running the example you mentioned above. But I was thinking that rcc is superior to rf (not sure), and I already had some processes from their exercises.

Yes, you can do web automation from docker.
Yes, nodeagent requires openflow.
Maybe I was not clear, sorry about that. rcc takes a weirdly long time to set up the environment, but once that is complete, rcc executes RPA workflows just as fast as many other solutions, including robotframework. So I simply suggest you avoid rcc if you can, but you have the option to choose what works best for you.

Try reading this, maybe that makes it clearer.


Great, thanks!
Oh, you have updated the 'Agents' page in the documentation.

Good job with the docs.
A few questions regarding the document:

Is there a way to see package history, and to revert to a previous version?

From Doc: Agent Page, header: 'Agent Capabilities'

'The agent will watch for package updates and automatically kill any packages running and update them when needed.'

How is it possible to send output to other agents or to some OT collector?

From Doc: Agent Page, header: 'Agent Capabilities'

‘The agent can also send the console output from any package to other agents or the web interface for easy remote monitoring. The agent can send detailed performance and monitoring data to an OpenTelemetry collector.’

Does this mean that the Agent creates a website path where I can see, on the web interface, what the bot is automating? And is it possible to interact with (clicks and keyboard inputs) or interrupt the Agent within that page?

From Doc: Agent Page, header: 'Agent Runtimes'

'When we do that, you can decide if any packages that will be running on the agent need to expose a web interface. If so, it will create an ingress route to the agent using the slug name. For instance, if you create an agent and it gets assigned the slug name dawn-cloud-223c and you are accessing an OpenFlow instance running at app.openiap.io. Once a package starts exposing a website/API, you can access that on https://dawn-cloud-223c.app.openiap.io.'

Maybe I could create an image for automations using rcc?

From Doc: Agent Page, header: 'Agent Runtimes'

‘But you are free to create your own images if you have special demands. For instance, if it takes a long time to install all dependencies and you need fast startup times, it can be very handy to have a separate image with all those pre-installed.’

Thanks for this clarification. I can access documents on my machine to work with only via the desktop/machine Agent (either through the Assistant or with the daemon), and not with the ones running in Docker (run through OpenFlow).
With the Assistant it is an attended type; with the daemon it is an unattended bot.

From Doc: Agent Page, header: 'NodeAgent / Daemon'

'Running an agent as a daemon is a handy way to run agents/packages that need access outside Docker/Kubernetes. If you are running OpenFlow in the cloud and need to run code on-premise, or if OpenFlow is running on a separate VLAN, you can install an agent on a machine where a package needs access to data not all packages need access to.'

This is a really good use case to point out!

From Doc: Agent Page, header: 'NodeAgent / Daemon'

‘Another use could be access to special or more powerful hardware. Most things in OpenFlow require almost no resources, so it can be beneficial to offload heavy workloads to hardware you have already purchased, to save on cloud costs. Or maybe you need to run heavy Machine Learning or LLM training/inference, and decided to rent GPUs at a different cloud provider like RunPod, or need access to Google TPUs for TensorFlow workloads, but want to keep your Kubernetes cluster somewhere else or in a different region.’

In the example that you have (GitHub - openiap/rccworkitemagent), there is a conda.yaml and also a requirements.txt for the packages that are necessary to install for Python. As I understood it, requirements.txt is going to be deprecated, and all packages would need to be declared in conda/environment.yaml?

From Doc: Agent Page, header: 'Python'

‘This way, you can define both the version of Python you want and whatever packages are needed for your Python project…Older project examples did not use an environment.yaml. If the agent does not find one but sees a requirements.txt file instead, it will call pip install -r requirements.txt in the project folder. Please do not depend on this system, as this is deprecated; it’s just documented here while all example projects are being updated.’

Would love to make examples of these use cases!

From Doc: Agent Page, header: 'Assistant'

‘Common use cases would be loading/processing files, generating reports, or simply creating a library of handy scripts that can help the users be more productive. As in all other cases, you can easily share, update, monitor, and control each agent and packages.’

Would love to make examples for all of those use cases too, and post them in git.

From Doc: Agent Page, header: 'Agent Capabilities'

‘The agent will watch for package updates and automatically kill any packages running and update them when needed. The agent can also send the console output from any package to other agents or the web interface for easy remote monitoring. The agent can send detailed performance and monitoring data to an OpenTelemetry collector. The agent can do port forwarding to other agents for easy remote debugging/troubleshooting. The agent can function as an extension of a work item queue and automatically execute packages for each work item added to the queue. This makes the process fully automated, and you simply need to read an environment variable to get and read and, if needed, update the filename containing the payload.’
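As a sketch of the last part of that quote, a package consuming work items could look like this. Note that the environment variable name used here is hypothetical; the documentation only says an environment variable holds the filename of the payload, so check the agent docs/examples for the real name.

```python
import json
import os

# Hypothetical variable name: the agent exposes the path to the payload
# file through an environment variable; the real name may differ.
PAYLOAD_ENV = "WORKITEM_PAYLOAD_FILE"

def load_payload() -> dict:
    """Read the work item payload the agent wrote to disk for us."""
    with open(os.environ[PAYLOAD_ENV], "r", encoding="utf-8") as f:
        return json.load(f)

def save_payload(payload: dict) -> None:
    """Write the (possibly updated) payload back to the same file so
    the agent can push the changes to the work item."""
    with open(os.environ[PAYLOAD_ENV], "w", encoding="utf-8") as f:
        json.dump(payload, f)

def process(payload: dict) -> dict:
    # ... the actual automation would go here ...
    payload["processed"] = True
    return payload
```

The point is simply that the package never talks to the queue directly: it reads the file the agent points it at, does its work, and writes the result back.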

Oh, I learned something new today. I did not know about the details tag :smiley:

packages history

No, sorry. I fear(ed) it would fill up the database too much, so for now it simply overwrites the existing file (or rather, it uploads a new file, updates the file id in the package definition, and deletes the old file).

Getting package output

When you click the console-output icon next to an agent, you can see the console output of each package running on that page. I don't have a page documenting how, but you can see how I implemented that inside the RunPackageCtrl controller, if you want to get access to it through means other than the webpage.

Monitoring using OT collector.

So most of my premium customers use an OT collector to monitor the system. It's kind of like what Application Insights is in Azure, but based on open-source tools, and it can run on-premise or in your own cloud. If you reference the OTEL module/package in your own code, you too can send information to the OT collector you have. If you don't have that infrastructure up and running, it might be easier to use a Software-as-a-Service platform that offers this type of functionality, like Azure or Datadog.

where I can see what the bot is automating

I think you're mixing things up here. In the above I explained how you can see the output. That is console output, not desktop automation.
There is no built-in way to monitor OpenRPA robots or assistants running on a desktop somewhere. If you need that, I can highly recommend Remotely or RustDesk, or if you want to go commercial, something like AnyDesk.

Paths/web inside agents

When an agent is running in docker/kubernetes, it's common that you would run something that exposes an API or web server, like Node-RED or Grafana or ML Studio. For that to work, you first need to enable "web" on the agent, and then you need to configure the package to tell the agent what ports you are listening on. Currently that is a little "messy", but it will soon be tied heavily to the package config. When that happens, I will also add support for sharing the same port between multiple packages … For that to work, it will of course require either unique paths or domain names for each package. Since this has not been created yet, I have not decided on how that will be implemented, so right now you can only expose ONE package per agent as a website.
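To make the "package exposes a web server" part concrete, here is a minimal stdlib-only Python sketch of such a package. The port number and handler are made up for illustration; a real package would use whatever framework it likes (Node-RED, Express, Flask, …) and would listen on whichever port it tells the agent about.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical port; the real value must match what the agent/package
# config says the package listens on.
PORT = 3000

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to any GET with a small plain-text body.
        body = b"package is alive"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep console output quiet

def serve(port: int = PORT) -> HTTPServer:
    """Start the server in a background thread and return it."""
    # Bind on 0.0.0.0 so the ingress route / agent can reach it
    # from outside the container.
    server = HTTPServer(("0.0.0.0", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With an ingress route as described in the quoted docs, a server like this would then be reachable on the agent's slug URL, e.g. https://dawn-cloud-223c.app.openiap.io.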

Maybe I could create image for automations using rcc?

If my images do not fulfill your requirements, yes, you can create your own. I do not allow custom images on app.openiap.io, but you can easily change the list of images you offer on your local installation of openflow.

Assistant it is attended type, with daemon its unattended bot

Agents are meant for any type of code/application, not only RPA. But if you are looking at this through RPA glasses, that is a correct way to differentiate them.

rccworkitemagent

As I stated earlier, I am/was going through the different projects and cleaning them up (as also stated on the documentation page),
so yes, it used to have both; now it only has a conda.yaml file.
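For reference, such a conda.yaml / environment.yaml typically looks like the sketch below. The package names and versions here are placeholders for illustration, not taken from the actual example repo.

```yaml
# Pins the Python version and declares the packages the agent
# should install for the project (replacing requirements.txt).
name: my-rpa-project
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - robotframework
```

Packages available on conda-forge go directly under `dependencies`; anything pip-only goes under the nested `pip:` section.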

Agent Capabilities

I'm not sure how you would create an example of that without it being super generic. I do get that people generally "grasp" how things work better by seeing examples, so I definitely see the use case for examples, but I'm soooooo bad at creating those. So I will depend more on the community to help create those.
I hope to also start creating more YouTube videos again, and I will be covering the non-code parts in some of those.


Ahahhaha, I just tried not to make the post too long, and to make it simpler to see the different questions. :smiley:

Thanks for those clarifications!
Agents are now so much clearer to me.

I'm currently a little bit confused by the OpenFlow Agents pages: the one where I start the Agent, and the one where I run more instances of it.
There is one instance on the page for creating an agent and scheduling (which runs the package immediately), and the 'Edit' page of the Agent where I can run more of them by selecting a package, running it, and restarting the Agent.

For those use cases, I meant that I could possibly be a contributor and create some of them, basic ones, based on my current knowledge.
I'm really fascinated by Robocorp and how they have created examples/use cases for their tutorials.
Maybe I could try creating something similar for one use case, just to test how it would go.

Add agent’s page

Used only to add an agent to docker or kubernetes.

Add/Edit agent’s page

Also allows configuring schedules of packages.

  • if the package has daemon: true, it will only allow you to add it and create custom environment variables, since it will be running all the time.
  • if the package has daemon: false, you can set a CRON schedule for how often you want the package to run on the given agent, and create custom environment variables.
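As a sketch of the difference (field names assumed from the discussion above; verify against the real package schema), a package meant to run on a schedule rather than continuously could be declared like this, with the CRON expression itself entered on the Add/Edit agent page:

```json
{
  "name": "nightly-report",
  "version": "0.0.1",
  "main": "main.py",
  "daemon": false
}
```

A schedule such as `0 2 * * *` would then run the package every night at 02:00; with `daemon: true` the schedule field would not apply, since the package runs all the time.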

“run package” page ( the icon i reference in above post )

This is where you can run ad-hoc packages and also monitor all running packages.

It might be confusing, but in my head it all made perfect sense when I made it like that. After it was made and people started using it, I got feedback on it, and I also personally find it a bit confusing whether an agent is running in docker, as a daemon on a remote machine, or inside an assistant. I tried making it clearer with the buttons at the top, but still.
Also, I really miss a reliable way to see if an agent is actually running. Right now I can only see that for docker instances; besides that, I only know if the user is online. I will need to update the sign-in code everywhere to improve this, and I really don't like making changes too fast, so right now it's going to stay that way a little longer.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.