Great, thanks!
Oh, you have updated the "Agents Page" in the documentation.
Good job with the doc.
A few questions regarding the document:
Is there a way to see package history, and to revert to a previous version?
From Doc: Agent Page, header: 'Agent Capabilities'
"The agent will watch for package updates and automatically kill any packages running and update them when needed."
How is it possible to send output to other agents or to an OpenTelemetry collector?
From Doc: Agent Page, header: 'Agent Capabilities'
"The agent can also send the console output from any package to other agents or the web interface for easy remote monitoring. The agent can send detailed performance and monitoring data to an OpenTelemetry collector."
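(For reference, on the receiving side I was picturing a minimal OpenTelemetry Collector config along these lines. The receiver/processor/exporter names are standard Collector components; I am only assuming the agent exports over OTLP, and I don't know yet how the agent is pointed at the collector.)

```yaml
# Minimal OpenTelemetry Collector sketch, assuming the agent exports via OTLP/gRPC.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
exporters:
  debug:   # just prints received telemetry to the collector's own console
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```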
Does this mean that the agent creates a website path where I can see what the bot is automating in the web interface? And is it possible to interact (clicks and keyboard inputs) with the agent or interrupt it within that page?
From Doc: Agent Page, header: 'Agent Runtimes'
"When we do that, you can decide if any packages that will be running on the agent need to expose a web interface. If so, it will create an ingress route to the agent using the slug name. For instance, if you create an agent and it gets assigned the slug name dawn-cloud-223c and you are accessing an OpenFlow instance running at app.openiap.io. Once a package starts exposing a website/API, you can access that on https://dawn-cloud-223c.app.openiap.io."
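(To make sure I understand the "package starts exposing a website/API" part, I imagine a package could be as simple as this Python sketch. The port and the PORT environment variable are my assumptions; I don't know how the agent or the ingress actually discovers which port the package listens on.)

```python
# Sketch of a package exposing a tiny web interface (standard library only).
# Assumption: the "PORT" variable and the 3000 default are hypothetical, not from the doc.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = int(os.environ.get("PORT", 3000))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from the agent package\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```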
Maybe I could create an image for automations using rcc?
From Doc: Agent Page, header: 'Agent Runtimes'
"But you are free to create your own images if you have special demands. For instance, if it takes a long time to install all dependencies and you need fast startup times, it can be very handy to have a separate image with all those pre-installed."
Thanks for this clarification. So I can access documents on my machine only with a desktop/machine agent (either through the Assistant or with the daemon), and not with the agents running in Docker (run through OpenFlow).
With the Assistant it is an attended type; with the daemon it is an unattended bot.
From Doc: Agent Page, header: 'NodeAgent / Daemon'
"Running an agent as a daemon is a handy way to run agents/packages that need access outside Docker/Kubernetes. If you are running OpenFlow in the cloud and need to run code on-premise, or if OpenFlow is running on a separate VLAN, you can install an agent on a machine where a package needs access to data not all packages need access to."
This is a really good use case to point out!
From Doc: Agent Page, header: 'NodeAgent / Daemon'
"Another use could be access to special or more powerful hardware. Most things in OpenFlow require almost no resources, so it can be beneficial to offload heavy workloads to hardware you have already purchased, to save on cloud costs. Or maybe you need to run heavy Machine Learning or LLM training/inference, and decided to rent GPUs at a different cloud provider like RunPod, or need access to Google TPUs for TensorFlow workloads, but want to keep your Kubernetes cluster somewhere else or in a different region."
In the example that you have (GitHub - openiap/rccworkitemagent), there is a conda.yaml and also a requirements.txt for the packages that need to be installed for Python. As I understand it, requirements.txt is being deprecated, and all packages will need to be declared in conda.yaml/environment.yaml?
From Doc: Agent Page, header: 'Python'
"This way, you can define both the version of Python you want and whatever packages are needed for your Python project… Older project examples did not use an environment.yaml. If the agent does not find one but sees a requirements.txt file instead, it will call pip install -r requirements.txt in the project folder. Please do not depend on this system, as this is deprecated; it's just documented here while all example projects are being updated."
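(For my own clarity, I assume the environment.yaml the doc refers to is a regular conda environment file along these lines, with the Python version pinned and pip packages listed; the package names here are just placeholders, not taken from the example repo.)

```yaml
# Sketch of a conda-style environment.yaml: pin Python and declare packages.
# Package names below are placeholders.
name: mypackage
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - requests
      - openpyxl
```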
Would love to make examples of these use cases!
From Doc: Agent Page, header: 'Assistant'
"Common use cases would be loading/processing files, generating reports, or simply creating a library of handy scripts that can help the users be more productive. As in all other cases, you can easily share, update, monitor, and control each agent and packages."
Would love to make examples for all of those use cases too, and post them in git.
From Doc: Agent Page, header: 'Agent Capabilities'
"The agent will watch for package updates and automatically kill any packages running and update them when needed. The agent can also send the console output from any package to other agents or the web interface for easy remote monitoring. The agent can send detailed performance and monitoring data to an OpenTelemetry collector. The agent can do port forwarding to other agents for easy remote debugging/troubleshooting. The agent can function as an extension of a work item queue and automatically execute packages for each work item added to the queue. This makes the process fully automated, and you simply need to read an environment variable to get and read and, if needed, update the filename containing the payload."
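(That work item part sounds very useful. Here is roughly what I picture a package doing, assuming a hypothetical environment variable name for the payload file, since the doc does not name it here.)

```python
# Sketch: read the work item payload file named by an environment variable,
# do the work, and write the updated payload back if needed.
# NOTE: "WORKITEM_PAYLOAD_FILE" is a hypothetical name, not taken from the doc,
# and the payload is assumed to be JSON here.
import json
import os

payload_file = os.environ.get("WORKITEM_PAYLOAD_FILE")
if payload_file and os.path.exists(payload_file):
    with open(payload_file, "r", encoding="utf-8") as f:
        payload = json.load(f)

    # ... process this work item here ...
    payload["processed"] = True

    # Update the file only if the work item payload needs to change.
    with open(payload_file, "w", encoding="utf-8") as f:
        json.dump(payload, f)
```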