Advanced Turbo Integrator chore techniques handle scheduling needs that the basic functionality of IBM Planning Analytics/TM1 is not well designed to address.
Planning Analytics/TM1 Basic Functionality
The classic PA/TM1 scheduling that suffices for most needs is shown below in the Planning Analytics Workspace (PAW) view. You can pick a start day and time; the frequency defaults to “daily,” and once enabled, the chore will execute every day:
You can also choose a different frequency via the frequency drop-down:
These settings cover a lot of ground. For example, pick a Friday and a 7-day interval, and the chore will run weekly. Or choose a 6 a.m. start and a 4-hour interval, and it will run at 6:00, 10:00, 14:00, 18:00, and so on. You can also set up multiple chores that execute the same underlying processes at different times to achieve something like “run every 2 hours, but only during business hours.”
- If you are considering scheduling chores on a minute or second basis, speak with an experienced PA/TM1 person to review whether it is really necessary. There are valid reasons for such schedules, but they are rare; often a better design removes the perceived need.
- We do not recommend executing multiple chores simultaneously, and watch for different rolling schedules that occasionally “sync up”! Running numerous chores together can be especially detrimental if the sub-processes edit data or metadata.
Tips and Tricks
The Architect interface currently lets you choose between running/scheduling in UTC or local time. It can also schedule run times to the nearest minute and second, e.g., 7:38:54 below:
Such precise scheduling can help avoid contention or confusion when the chore and outside events would otherwise run simultaneously. UTC scheduling is helpful when you need to coordinate system operations across many time zones. We expect this functionality will eventually be migrated to the PAW (Workspace) interface.
Advanced Scheduling and Chore Parameter Control
Sometimes, you may need to run chores on a more complex basis:
- You close the books on the 15th of the month. Until then, you execute the complete consolidation “process train” every morning at 4 a.m., before the business day starts, and again at 1 p.m., so data is fresh in the local morning and after lunch; after the close, you stop the automatic execution and reload.
- Some chores must happen on the last or 5th day of the month.
- Beginning on the 1st business day of the month, you must check for new CSV file uploads every 4 hours until the 8th, at which point automatic loading stops: European-sourced files at noon Eastern Time (North America), American-sourced files at 5 p.m., and Asian files at 6 a.m.
- During some periods, you must give certain chores absolute priority; otherwise, you let the normal system-managed locking/unlocking and waiting occur.
- A specific chore must run every 2 hours, but never overnight between 6 p.m. and 5 a.m.
- You want chores to run one after another without formally scheduling each at a specific time, especially when variability in execution time can cause contention under occasional exceptional conditions (end of month, end of year).
- For example, suppose a chore runs at midnight and a second chore that depends on it executes 30 minutes later. This typically works fine, but if the first chore fails to finish in time, the second will fail or partially execute, potentially leaving the system’s data or metadata in a state that negatively affects users.
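One way to avoid this midnight/12:30 dependency problem is to replace the two timed chores with a single “driver” process that runs the steps in order and checks each exit status. The sketch below uses hypothetical process names (`Load.GL.Data`, `Rebuild.Reporting.Cube`) and assumes each step is an ordinary TI process:

```
# Driver process (sketch): run dependent steps in sequence and stop the
# chain if an earlier step fails, so the second step never runs against
# incomplete data. Process names are illustrative.

nResult = ExecuteProcess( 'Load.GL.Data' );

IF( nResult <> ProcessExitNormal() );
    LogOutput( 'ERROR', 'Load.GL.Data did not complete; skipping rebuild.' );
    ProcessQuit;
ENDIF;

ExecuteProcess( 'Rebuild.Reporting.Cube' );
```

Because ExecuteProcess runs synchronously, the second step starts only when the first has actually finished, regardless of how long it took.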
In addition, and closely related: as chores run, it is helpful to pass their constituent processes parameters that users may change or want to change. Native chores can only pass parameters hardcoded into the chore definition.
The bad news is that PA/TM1 does not offer this as native functionality. The good news is that we can build it with a combination of existing “internal to TM1/PA” techniques.
How to Organize Chores & Processes
While there are ways to “reach into” TM1 from external scheduling applications via an API, within TM1 itself there are two ways to run a series of processes in a specific order:
- You can add the processes to the chore in a specific order, supplying pre-arranged parameters when you create the chore if the processes require them.
- You can use one process to execute sub-processes and pass them variables.
- You can also “mix and match” the above.
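The second option, a master process calling sub-processes, can be sketched as follows. The process and parameter names (`Extract.Sales`, `pRegion`, `pPeriod`) are illustrative; each child process would declare the matching parameters:

```
# Master process (sketch): run sub-processes in a fixed order, passing
# the same parameter values to each. ExecuteProcess supplies parameters
# as name/value pairs after the process name.

sRegion = 'Europe';
sPeriod = '2024-M06';

ExecuteProcess( 'Extract.Sales', 'pRegion', sRegion, 'pPeriod', sPeriod );
ExecuteProcess( 'Load.Sales',    'pRegion', sRegion, 'pPeriod', sPeriod );
```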
Development Note: Chore Calling Processes
Chores can be set to “commit all changes at the end of the chore” (single commit) or to “commit at the end of each process” (multiple commit). In single commit mode, changes are held until the very end, so data or metadata changes made by an earlier process may not be available to later processes. In multiple commit mode, each process executes and commits one after another, so all subsequent processes see prior changes. Generally, we advise multiple commit mode.
Development Note: Processes Calling Processes
When processes call other processes (within chores or on their own), the behavior is significantly different. A child process does not commit its changes on exit of its epilog section; if a calling process runs multiple child processes, the sibling processes do not see each other’s committed changes, and no commit occurs until the “mother” process terminates. Consequently, unexpected results can occur if later process logic depends on changes made earlier in the chain. Carefully consider what information your logic uses and ensure the necessary commit events have actually occurred.
The CUBESAVEDATA command forces a “commit to disk” event for a specific cube. You can potentially use it to work around this issue, as long as you understand the locking implications and the time it takes.
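As a minimal sketch (the cube name is hypothetical), the call is a one-liner, typically placed in a process epilog:

```
# Sketch: serialize the named cube's in-memory data to disk. Note that
# this takes a lock on the cube while it writes, which can be slow for
# large cubes. 'Sales' is an illustrative cube name.
CubeSaveData( 'Sales' );
```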
Advanced Scheduling Implementation
Advanced scheduling requires process code that evaluates scheduling logic: you set up a chore, and one or more of its constituent processes decide at run-time what to do. A common aid is a chore “control cube” that contains and exposes the necessary control data.
The heart of flexible, easily controllable, and maintainable control logic is externalizing the control parameters from the processes. You can save/enter “hard-coded” parameters in the chore itself, but that requires editing the chore, which is somewhat error-prone and hard to audit. The same is true of hard-coded process logic.
A much better practice is to set up a control cube with the following dimensions:
- An index dimension, with elements ‘Index1’, ‘Index2’, ‘Index3’, etc.
- A measure dimension, with various elements as needed, including comment field(s).
- A chore dimension. This can be the }Chores control dimension, but it is better practice to use a system-specific “<MyChoreNames>” dimension, as not all chores need complex controls.
Control Cube Example
In this example, “NightlyChore” carries a comment that helps the admin know which real chore is being referred to, along with other information. Since many chores run for months with minimal attention, these memory aids help the administrators or power users charged with upkeep.
Another comment might hold the exact name of the process in the chore that the index refers to. Again, this is primarily a convenience, letting users rapidly identify which chore index drives which process.
This example also uses generic parameter names, so a whole set of processes can be controlled from one cube. These names suggest designing processes with an opt-in option such as a “CELLGET parameters = yes” parameter, so the process knows to look to the control cube and ignore its hardcoded parameters.
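A prolog sketch of that opt-in pattern might look like the following. The cube name (`SYS.Chore.Control`), its element names, and the `pUseControlCube` parameter are all illustrative assumptions, not fixed conventions:

```
# Prolog sketch: if the caller sets pUseControlCube to 'yes', read the
# run-time parameter values from the control cube instead of using the
# hardcoded/chore-supplied values. All names are illustrative.

IF( pUseControlCube @= 'yes' );
    sSourceFile = CellGetS( 'SYS.Chore.Control', 'NightlyChore', 'Index1', 'Value' );
    nStartHour  = CellGetN( 'SYS.Chore.Control', 'NightlyChore', 'Index2', 'NValue' );
ENDIF;
```

Because the values live in a cube, an administrator can change them from any front end without touching the chore or the process code.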
Typical Commands for Advanced Scheduling
Hints and Tips
- Set On and Off times as parameters in the control cube. Use the NOW() function to get the local system time as a serial value, and TIMVL (or TIMST) to convert it and extract the hour.
- To skip unwanted code execution, use PROCESSBREAK, CHOREQUIT, or ITEMSKIP.
- Create multiple chores. It is often easier to create two chores that execute the same processes once a day at different times than to run one chore every hour that checks “is it time yet?”. This also creates fewer log entries.
- Set up the control cube to indicate the day of the month on which the automatic “data pull and refresh” stops. Use TIMVL(NOW(), ‘D’) to check the day. “Business day” is often more logical, but given the complexity of determining it, it can be easier and faster for a user to set the correct calendar day each month.
- For the last day of the month, use cube or process rules to populate the control cube by counting back one day, in serial time, from day “1” of the following month. Similar process-driven logic can update the control cube with things like the date of the 3rd Friday of the month. The process commands FormatDate, NewDateFormatter, and ParseDate are helpful here. We recommend offloading this to a time attribute cube.
- Processes that load files should also test for file existence (the FILEEXISTS command) and move already-loaded files to an archive directory (batch files work well for this). As a best practice, all file paths and names should be parameters in the control cube.
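Several of these hints combine naturally in one prolog “gate.” The sketch below assumes a chore that fires every 2 hours and a control cube named `SYS.Chore.Control` (all cube, element, and measure names are illustrative); the code decides whether there is actually anything to do right now:

```
# Prolog gate (sketch): the chore runs every 2 hours; this code quits
# quietly outside the active window, after the monthly stop day, or when
# the source file has not arrived yet.

nHour = TIMVL( NOW(), 'H' );    # hour of local (serial) system time
nDay  = TIMVL( NOW(), 'D' );    # day of the month

nStopDay  = CellGetN( 'SYS.Chore.Control', 'FileLoad', 'Index1', 'NValue' );
sFileName = CellGetS( 'SYS.Chore.Control', 'FileLoad', 'Index2', 'Value' );

# Outside the active window (05:00-18:00), or past the stop day: do nothing.
# In TI, % is logical OR.
IF( nHour < 5 % nHour >= 18 % nDay > nStopDay );
    ProcessQuit;
ENDIF;

# Only proceed when the source file actually exists.
IF( FileExists( sFileName ) = 0 );
    ProcessQuit;
ENDIF;
```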
A Note on Priority
The proverbial “800 lb gorilla” is the pair of commands EnableBulkLoadMode() and DisableBulkLoadMode(). They suspend all other PA/TM1 activity on the service: users are paused, logins stop, and scheduled chores will not run if they fall within the interval. For most purposes this is overkill, but the commands have their place in dynamic systems where mission-critical operations must run without interference.
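Used sparingly, the pattern is to wrap only the critical work and release the server as soon as possible. A sketch (the process name is hypothetical; note that bulk load mode can only be enabled in a prolog or epilog section):

```
# Sketch: give a mission-critical load exclusive use of the server.
# While bulk load mode is on, all other threads are suspended, so keep
# the window as short as possible and always disable it afterward.

EnableBulkLoadMode();

ExecuteProcess( 'Critical.MonthEnd.Load' );   # illustrative process name

DisableBulkLoadMode();
```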
- For a “softer” priority where known conflicts might occur, “run semaphores” can be helpful to force processes to wait on others.
- Synchronization via the SYNCHRONIZED function (driven by a control cube that names the sync objects) can also force serial execution.
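As a sketch of the SYNCHRONIZED approach: every process that must not run concurrently calls Synchronized() in its prolog with the same lock name, which is just an arbitrary string; here it is read from a control cube (the cube and element names are illustrative):

```
# Prolog sketch: processes calling Synchronized() with the same lock name
# execute serially; the second waits until the first finishes. Reading
# the lock name from the control cube lets an admin group or regroup
# processes without code changes.

sLockName = CellGetS( 'SYS.Chore.Control', 'NightlyChore', 'Index3', 'Value' );
Synchronized( sLockName );
```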
With correct design and implementation, we can accommodate almost any client’s desire for advanced schedules in PA/TM1. However, complex schemes should be carefully thought through and tested.