Libraries and Templates in Orchestrator
Orchestrator can also store libraries and templates so they can be reused across projects, alongside its other features such as automation and robot management. It gives all developers within a company a single place to share development assets and to make sure the right versions are being used.
Storage Buckets
This lesson covers Storage Buckets, which provide a per-folder storage solution that RPA developers can leverage when creating automation projects. A Storage Bucket is created within a folder, so you can control access to it and its contents with fine-grained permissions and role assignments.
What are Storage Buckets?
Storage Buckets are Orchestrator entities for storing files that can be used in automation projects. UiPath Studio offers a set of activities to simplify working with Storage Buckets; these are available in the UiPath.System.Activities pack, under Orchestrator.
Storage Buckets can be created using the Orchestrator database or an external provider, such as Azure, Amazon, or MinIO. Each Storage Bucket is a folder-scoped entity, allowing fine-grained control over storage and content access.
Why do you need Storage Buckets?
UiPath Studio offers many options for working with files in automation projects. In certain contexts, Storage Buckets are the best choice: for example, when you need to use large files stored in a centralized location, or when you need to grant access to multiple robots in a controlled way.
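For orientation only, here is a minimal Python sketch of how a script outside Studio might read a file from a Storage Bucket through the Orchestrator REST API. The URL, token, folder ID, bucket ID, and file path are placeholders, and the endpoint name and response field are assumptions that should be checked against the Orchestrator API reference; inside Studio you would normally use the dedicated Storage Bucket activities instead.

```python
import requests

ORCH_URL = "https://your-orchestrator.example.com"   # placeholder Orchestrator URL
TOKEN = "<bearer token>"                              # placeholder token obtained beforehand
FOLDER_ID = "123"                                     # ID of the folder that owns the bucket
BUCKET_ID = 1                                         # hypothetical bucket ID

headers = {
    "Authorization": f"Bearer {TOKEN}",
    # Storage Buckets are folder-scoped, so the owning folder is sent with each call.
    "X-UIPATH-OrganizationUnitId": FOLDER_ID,
}

# Assumed endpoint: ask Orchestrator for a temporary read URI for a file in the bucket.
resp = requests.get(
    f"{ORCH_URL}/odata/Buckets({BUCKET_ID})/UiPath.Server.Configuration.OData.GetReadUri",
    params={"path": "/templates/invoice_template.xlsx"},  # hypothetical file path
    headers=headers,
)
resp.raise_for_status()

# Download the file from the returned URI (served by the underlying storage provider).
file_bytes = requests.get(resp.json()["Uri"]).content    # assumed response field
with open("invoice_template.xlsx", "wb") as f:
    f.write(file_bytes)
```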
Queues
Orchestrator can store items in queues and distribute them individually to robots for processing, while monitoring the progress of each item (based on processing outcomes) and of the process as a whole. When queue items enter processing, they become transactions. These items are designed to be indivisible units of work: a customer contract, an invoice, a complaint, and so on. Queues are extremely important for large-scale automation projects: they allow the storage and processing of practically unlimited amounts of data, the only condition being that the data is organized into transactions. Queue item statuses are also discussed in this lesson.
What are queues?
Queues are containers that can hold an unlimited number of items. Items can store multiple types of data, by default in free form. If a specific data schema is needed, it can be uploaded at queue creation in the form of a JSON file.
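As an illustration, the sketch below shows what such a schema could look like for an invoice queue and validates a sample item against it locally with the jsonschema package. The field names are hypothetical; the real schema depends on your process.

```python
from jsonschema import validate  # pip install jsonschema

# Hypothetical schema for the specific data of an invoice queue item.
# A JSON file with this content could be uploaded when the queue is created.
invoice_item_schema = {
    "type": "object",
    "properties": {
        "InvoiceNumber": {"type": "string"},
        "Amount": {"type": "number", "minimum": 0},
        "Currency": {"type": "string", "enum": ["EUR", "USD", "GBP"]},
    },
    "required": ["InvoiceNumber", "Amount", "Currency"],
}

# Validate a sample item locally before adding it to the queue.
validate(
    instance={"InvoiceNumber": "INV-0042", "Amount": 199.99, "Currency": "EUR"},
    schema=invoice_item_schema,
)
```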
Why do you need queues?
Working with queues is very useful in large-scale automation scenarios underpinned by complex logic. Such scenarios pose many challenges: bringing items from multiple sources together, processing them according to a unique logic, using resources efficiently, and reporting at the level of individual items and of the whole queue, including through SLAs.
Consider the customer enrollment process through forms for a retail company: customers may come from different sources (online, partners, own shops, call center), so a smooth way of adding them all to the processing line is crucial. Working with queues also ensures that they are processed within the SLA time constraints.
Creating queues
Queues are easily created in Orchestrator from the Queues entry in the menu. They are folder-scoped entities, which allows setting up fine-grained permissions.
When creating a queue, you set the maximum number of retries (the number of times you want a queue item to be retried) and the Unique Reference field (select Yes if you want the transaction references to be unique). You can later update existing queue settings, such as:
- The queue name.
- The Auto Retry option.
- The maximum number of retries.
In Orchestrator, newly created queues are empty by default. To populate them with items, you can use either the upload functionality in Orchestrator or Studio activities. Bulk upload is supported directly in Orchestrator from .csv files.
Populating and consuming queues
To ensure optimal use of the robots, queues are typically used with the Dispatcher-Performer model of running automations. In this model, the two main stages of a process involving queues are separated (see the sketch after this list):
- The stage in which data is gathered and fed into a queue in Orchestrator, from where it can be picked up and processed by the robots, is called the Dispatcher.
- The stage in which the data is processed is called the Performer.
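Here is a minimal Python sketch of this separation. It uses an in-process queue.Queue purely as a stand-in for an Orchestrator queue, and a hypothetical enrollments.csv input file; in a real project the Dispatcher and the Performer would be separate workflows exchanging data only through Orchestrator.

```python
import csv
import queue

work_queue = queue.Queue()  # stand-in for an Orchestrator queue


def dispatcher(csv_path: str) -> None:
    """Dispatcher stage: read raw input and feed each record into the queue."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            work_queue.put(row)  # in Orchestrator: Add Queue Item


def performer() -> None:
    """Performer stage: take items one by one and process them."""
    while not work_queue.empty():
        item = work_queue.get()  # in Orchestrator: Get Transaction Item
        process(item)            # business logic for a single transaction


def process(item: dict) -> None:
    print(f"Processing customer enrollment for {item.get('Email', '<unknown>')}")


if __name__ == "__main__":
    dispatcher("enrollments.csv")  # hypothetical input file
    performer()
```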
Working with queues and queue items is done using the dedicated activities from the official UiPath.System.Activities package, under Orchestrator. These are:
Add Transaction Item
The robot adds an item to the queue and starts the transaction, with the status 'In Progress'. The queue item cannot be sent for processing until the robot finalizes this activity and updates the status.
Get Transaction Item
Gets an item from the queue to process it, setting the status to 'In progress'.
Postpone Transaction Item
Adds time parameters between which a transaction must be processed.
Set Transaction Progress
Enables the creation of custom progress statuses for In Progress transactions. This can be useful for transactions with a longer processing duration, where breaking down the workload provides valuable information.
Set Transaction Status
Changes the status of the transaction item to Failed (with an Application or Business Exception) or Successful. As a general approach, transactions failed due to Application Exceptions are retried, while transactions failed due to Business Exceptions are not.
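To make this retry rule concrete, here is a small Python sketch of the decision typically taken at the end of each transaction. The exception classes and the set_transaction_status helper are hypothetical stand-ins for the Studio activity, not UiPath APIs.

```python
class BusinessRuleException(Exception):
    """The data breaks a business rule; retrying will not help."""


class ApplicationException(Exception):
    """A technical or system problem occurred; a retry may succeed."""


def set_transaction_status(item: str, status: str, error_type: str | None = None) -> None:
    """Hypothetical stand-in for the Set Transaction Status activity."""
    detail = f" ({error_type} exception)" if error_type else ""
    print(f"{item}: {status}{detail}")


def finish_transaction(item: str, error: Exception | None) -> None:
    """Decide the final status of a transaction, mirroring the usual retry rule."""
    if error is None:
        set_transaction_status(item, "Successful")
    elif isinstance(error, BusinessRuleException):
        # Business exceptions are final: the item fails and is not retried.
        set_transaction_status(item, "Failed", error_type="Business")
    else:
        # Any other (application) exception fails the item but leaves it eligible for auto-retry.
        set_transaction_status(item, "Failed", error_type="Application")


finish_transaction("invoice-1", None)
finish_transaction("invoice-2", BusinessRuleException("missing PO number"))
finish_transaction("invoice-3", ApplicationException("target system timed out"))
```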
Add Queue Item
When encountering this activity in a workflow, the robot adds an item to the specified queue and configures its time frame and other parameters.
Two of the properties that are worth highlighting are:
Deadline: add a date by which the items must be processed.
Priority: select Low, Normal, or High, depending on the importance of the items that are added by this activity and how fast you want them to be processed.
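As a hedged illustration, the sketch below adds an item with a deadline and a priority through the Orchestrator REST API's AddQueueItem endpoint; in Studio you would simply set the corresponding properties on the Add Queue Item activity. The URL, token, folder ID, and queue name are placeholders, and the exact payload and response fields should be confirmed against the Orchestrator API reference.

```python
import requests

ORCH_URL = "https://your-orchestrator.example.com"   # placeholder Orchestrator URL
HEADERS = {
    "Authorization": "Bearer <token>",                # placeholder token
    "X-UIPATH-OrganizationUnitId": "123",             # folder that owns the queue
}

payload = {
    "itemData": {
        "Name": "CustomerEnrollments",                # hypothetical queue name
        "Priority": "High",                           # Low, Normal, or High
        "DueDate": "2024-05-31T19:00:00Z",            # the Deadline shown in Orchestrator
        "SpecificContent": {"Email": "jane.doe@example.com", "Plan": "Premium"},
    }
}

resp = requests.post(
    f"{ORCH_URL}/odata/Queues/UiPathODataSvc.AddQueueItem",
    json=payload,
    headers=HEADERS,
)
resp.raise_for_status()
print("Queue item created:", resp.json().get("Id"))   # assumed response field
```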
Within any given queue, the transactions are processed in a hierarchical manner, according to this order:
- Items that have a Deadline, as follows:
  - in order of Priority, and
  - according to the set Deadline for items with the same Priority.
- Items with no Deadline, as follows:
  - in order of Priority, and
  - according to the rule First In, First Out for items with the same Priority.
For example, a queue item that's due today at 7:00 PM and has a Normal priority is processed before another item that has no due date and a High priority.
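The ordering rule can be modeled as a sort key. The Python sketch below is a simplified illustration of that precedence, not Orchestrator's actual implementation; the items and timestamps are made up.

```python
from datetime import datetime

PRIORITY_RANK = {"High": 0, "Normal": 1, "Low": 2}

items = [
    {"ref": "A", "priority": "High",   "deadline": None,                      "added": 1},
    {"ref": "B", "priority": "Normal", "deadline": datetime(2024, 5, 31, 19), "added": 2},
    {"ref": "C", "priority": "Low",    "deadline": datetime(2024, 5, 31, 12), "added": 3},
]

def processing_order(item):
    has_no_deadline = item["deadline"] is None
    return (
        has_no_deadline,                     # items with a deadline come first
        PRIORITY_RANK[item["priority"]],     # then by priority
        item["deadline"] or datetime.max,    # then by deadline...
        item["added"],                       # ...or first in, first out
    )

for item in sorted(items, key=processing_order):
    print(item["ref"])   # prints B, C, A: both deadline items beat the High item with no deadline
```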
Queue item statuses
A queue item can have one of the statuses below. These will be set automatically following human user and robot actions, and/or using the Set Transaction Status activity. Custom sub-statuses can be set for queue items which are 'In Progress', using the Set Transaction Progress activity.
New
The item is in one of these situations:
- it was just added to the queue with Add Queue Item, or
- it was postponed, or
- a deadline was added to it, or
- it was added automatically after a previous queue item with auto-retry enabled failed.
In Progress
The item was processed with the Get Transaction Item or the Add Transaction Item activity. When an item has this status, the custom progress status is also displayed, in the Progress column.
Failed
The item did not meet a business or application requirement within the project, and it was therefore sent to a Set Transaction Status activity, which changed its status to Failed.
Successful
The item was processed and sent to a Set Transaction Status activity, which changed its status to Successful.
Abandoned
The item remained in the In Progress status for a long period of time (approx. 24 hours) without being processed.
Retried
The item failed with an application exception and was retried. At the end of the retry, the status is updated to a final one: Successful or Failed.
Deleted
The item has been manually deleted from the Transactions page.
Apart from the statuses above, queue items which have been abandoned or failed can enter a revision phase. In such a case, there are specific revision statuses, set by the reviewers. Both types of statuses are illustrated in the diagram below.
Transactions and Types of Processes
LINEAR
The process steps are performed only once. If the need is to process different data, the automation needs to be executed again. For example, an email request coming in triggers an automation which gathers data and provides a reply to the sender. The process is executed for each individual email.
Linear processes are usually simple and easy to implement, but not very suitable for situations that require repeating steps with different data.
ITERATIVE
The steps of the process are performed multiple times, but each time different data items are used. For example, instead of reading a single email on each execution, the automation can retrieve multiple emails and iterate through them doing the same steps.
This process implementation is done with a simple loop, but it has a disadvantage: if a problem happens while processing one item, the whole process is interrupted and the rest of the items remain unprocessed.
TRANSACTIONAL
Similar to iterative processes, the steps of transactional processes repeat multiple times over different data items. However, the automation is designed so that each repeatable part is processed independently.
These repeatable parts are called transactions. Transactions are independent of each other because they do not share any data and do not need to be processed in any particular order.
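The difference in robustness can be seen in a small Python sketch: in the iterative version a single bad item stops the loop, while in the transactional version each item is handled independently and a failure is recorded without blocking the rest. The handle_one function and the item names are hypothetical.

```python
items = ["invoice-1", "invoice-2", "bad-invoice", "invoice-4"]

def handle_one(item: str) -> None:
    if item.startswith("bad"):
        raise ValueError(f"cannot process {item}")
    print(f"processed {item}")

# Iterative: one failure interrupts the whole run; invoice-4 would never be processed.
def run_iterative() -> None:
    for item in items:
        handle_one(item)

# Transactional: each item is an independent unit of work with its own outcome.
def run_transactional() -> None:
    for item in items:
        try:
            handle_one(item)
        except Exception as exc:
            print(f"failed {item}: {exc}")   # mark the transaction Failed and move on

if __name__ == "__main__":
    run_transactional()
```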
The three categories of processes can be seen as the maturity stages of an automation project: starting with simple linear tasks, moving through multiple repetitions, and finally evolving into a transactional approach.
However, this is not a rule for all cases, and the category should be chosen according to the characteristics of the process (e.g., data being processed and frequency of repetitions) and other relevant requirements (e.g., ease of use and robustness).
What is it?
The Transactional Process is a project template based on a Flowchart, optimized for basic automation processes.
What is a transaction?
A transaction represents the minimum (atomic) amount of data and the necessary steps required to process the data, by fulfilling a section of a business process. A typical example would be a process that reads a single email from a mailbox and extracts data from it.
We call the data atomic because once it is processed, the assumption is that we no longer need it as we advance with the business process.
What are some business scenarios in which I will use transaction processing?
Business Scenarios-1
You need to read data from several invoices that are in a folder, and input that data into another system. Each invoice can be seen as a transaction, because there is a repetitive process for each of them (i.e. extract the data and input it somewhere else).
Business Scenarios-2
There is a list of people and their email addresses in a spreadsheet, and an email needs to be sent to each of them along with a personalized message. The steps in this process (i.e., get data from spreadsheet, create personalized message, and send email) are the same for each person, so each row in the spreadsheet can be considered a transaction.
Business Scenarios-3
When looking for a new apartment, a robot can be used to make a search according to some criteria. For each result of the search, the robot extracts the information about the property and inserts the data into a spreadsheet. In this case, the details page for each property constitutes a transaction.
Understand the best practices of working with Orchestrator Resources
A few best practices include:
- To reduce SQL Server allocation contention in a highly concurrent environment, make sure you employ an optimal number of tempdb data files that have equal sizing.
- To mitigate the risks and potential impact from a malicious user, follow these guidelines:
  - Limit robot permissions to the minimum required to execute the particular automation(s).
  - In modern folders, disable robot creation for those users with administrator or other high-privilege roles in Orchestrator.
- To mitigate the risks and potential impact from a malicious developer, follow these guidelines:
  - Maintain control and validation over any packages being deployed in Orchestrator.
  - Limit robot permissions to the minimum required to execute the particular automation(s).
  - In modern folders, disable robot creation for those users with administrator or other high-privilege roles in Orchestrator.
- Change the default system administrator password (the one communicated to you by the UiPath team). You can do this by editing the user profile information.
- When you first log in to Orchestrator, do not select the Remember Me option. This ensures you are logged out of the current session every time.
- While enforcing an HTTPS connection is important, it is just as important to have an SSL certificate from a trusted provider.
- It is recommended to add security caching directives to hide sensitive information that may be displayed in HTTP headers.
- Do not use the Orchestrator installation root directory or any directory that gets served by a web server.
- Make sure the supplied folder or network share does not have any subdirectories or files containing sensitive information that should not be accessible to Orchestrator users or automations.
- Do not use a full partition, such as C:\, as it might result in read access to unexpected data.
- When possible, restrict the access to a subdirectory created specifically for storage buckets. For example, if you use C:\my_data to store all your data, you could create a subdirectory called storage_buckets, and then add C:\my_data\storage_buckets to the allowlist instead of C:\my_data.
- Look for hidden files and folders before deciding on the allowed paths, as they might contain sensitive data.
- Use a specific folder in a network share instead of the whole network share (e.g. \\server.corp\sharedfolder).
- Do not allow system-specific folders, such as C:\Windows, C:\Program Files or C:\ProgramData, as they might contain sensitive information.
- By default, the Elasticsearch security features are disabled if you have a basic or trial license. We strongly recommend that you enable them.