It would be really useful if you could pop a specific work item that you want to work on. The current signature is:
client.pop_workitem(wiq, wiqid="", downloadfolder=".")
Something like this:
client.pop_workitem(wiq, wiqid="", wi_id="", downloadfolder=".")
So, the only difference is that when wi_id is passed as an argument, that specific work item gets popped.
I disagree, but I am open to a good argument on why this would be useful.
The whole point of work item queues is that they distribute the work to stateless agents. If you are supplying the ID, then (in my view, at least so far) you are trying to bypass all the logic work item queues are there to provide. If you just need to read the content, you can always read the work item directly from the workitem collection.
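For the read-only case, a minimal sketch (the query method, its signature, and the "workitems" collection name are assumptions here; adjust to whatever your client version exposes):

```python
# Sketch: read a work item directly instead of popping it.
# Assumes the client exposes a generic query method against the
# "workitems" collection; the method name and signature may differ
# in your client version.
items = client.query(
    collectionname="workitems",
    query={"_id": "...the work item id..."},  # placeholder id
)
if items:
    print(items[0])  # inspect payload/state without touching queue logic
```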
I was thinking of a case like this:
- The agent needs to process a work item in a certain way, but only if some condition is fulfilled (the condition can depend on another system).
- If the condition is not fulfilled, the agent would need to set the status to retry, but after a certain number of retries the item will be failed (it is not possible to set it back to new, because it is already in ‘processing’).
- If the condition is fulfilled, the agent will complete the item successfully.
Another way which would be good: if I check the condition by querying the work items, I would be able to just pop that specific work item and process it, as in the sketch below.
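To make that concrete, here is the flow I have in mind. The wi_id argument is the proposed addition and does not exist today; the query call, condition_is_met() and process() are placeholders for this example:

```python
# Hypothetical flow using the proposed wi_id argument (NOT available
# today). condition_is_met(), process() and the query call are
# illustrative placeholders.
candidates = client.query(collectionname="workitems",
                          query={"wiq": "myqueue", "state": "new"})
for item in candidates:
    if condition_is_met(item):  # condition lives in another system
        workitem = client.pop_workitem("myqueue", wi_id=item["_id"])
        process(workitem)
        break
```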
But you could easily do that by creating 2 queues…
On queue one, set a retry delay to something high, say one hour, and set the successful queue to queue number 2.
You pop the item off the first queue, check if the condition is true in the other system. If not, send it back with “retry”. Once the condition is true, set it to successful. Then it will go to queue 2, where you do NOT have a retry delay. This will get popped right away by the agent/robot that then does what needs to be done.
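A minimal sketch of the agent on queue one, reusing the pop call from above; update_workitem, the state names, and condition_is_met() are assumptions based on common work item queue semantics, so verify them against your client's docs:

```python
# Agent loop for queue one: gate each item on the external condition.
# update_workitem and the state values are assumptions; check your
# client's documentation for the exact API.
workitem = client.pop_workitem("queue_one", downloadfolder=".")
if workitem is not None:
    if condition_is_met(workitem):     # check the other system
        workitem.state = "successful"  # routes the item on to queue two
    else:
        workitem.state = "retry"       # the 1-hour retry delay re-queues it
    client.update_workitem(workitem)
```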
Yes, that’s the way out of this, but the problem is that the failing path will potentially run many times until it reaches the retry limit.
And the second case is when I’m listing the work items myself and want to run the agent for a specific one, the one that I select.
- You can change the retry count on each queue, so that should not be a problem.
- In that case, you are supposed to use multiple queues. It makes no sense to use one queue when only specific items need to go to specific robots. Instead, create a queue per “group” of robots that are the same and add the items there.
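For example, whoever creates the items can route them to the right group’s queue at push time (push_workitem, the queue names, and the routing rule here are all illustrative):

```python
# Route each new item to the queue of the robot group that should
# handle it. push_workitem, the queue names and the routing rule are
# illustrative only.
for order in new_orders:
    target = "invoice-robots" if order["type"] == "invoice" else "shipping-robots"
    client.push_workitem(wiq=target, name=order["id"], payload=order)
```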
The only case I’ve seen so far where the current system is a little “broken” was in this case: A company receives multiple big CSV files. For each line in the CSV file, they need to go through multiple processing steps. Once all items for one sheet have been processed, they need to “merge” them together into one big sheet. That last part was a PAIN to get right since we were working with millions of rows spread over multiple big CSV files. We created 2 processes for this.
- A split/joining process spanning 3 work item queues.
- A “handle each individual line” process spanning multiple connected queues. If an item did not get all of its lines completed successfully after XX retries, an alert is sent to an operator who can investigate and handle it.
Due to that, we now have a feature in our backlog to implement a way to create “virtual payloads” across multiple work item queues. We have not started that yet, but we hope to get there soon.