With the EPMA Batch Client, a little Perl scripting and a dash of SQL, you can easily automate the maintenance of your Oracle EPM metadata.
EPMA to the Max
In recent releases, EPMA has started gaining more traction, and as existing customers upgrade, many are choosing to convert “classic” applications to EPMA. Most who have heard about EPMA are aware of the benefits it offers for sharing metadata across EPM applications, but fewer are familiar with the opportunities it provides for automating metadata maintenance. To illustrate the robust level of automation that can be achieved with EPMA, I would like to outline an automated metadata management solution based largely on a real-world solution implemented at one Oracle EPM customer more than two years ago. That solution has automatically processed over 30,000 metadata updates since it was put into Production. Let me start by sketching out the landscape:
Oracle EPM Applications
- Oracle Hyperion Financial Management
- Primary Production Application
- 2-3 “Archive” Applications
- Oracle Hyperion Planning
- 4-6 Production Applications
- Oracle Essbase
- 6 Essbase BSO applications
- Essbase BSO applications using Essbase Integration Services
- Oracle E-Business Suite (eBS)
- Capital Asset System
The solution had to meet the following requirements:
- Update metadata in EPMA for the following types of changes in the source systems: new members, hierarchy changes and alias/description updates.
- An application should not include a new member unless the member is required in order to load actual data from the source systems. (i.e., the process should not add unused members to an application).
- The latest actual data from the source systems should be loaded to the applications, ensuring there are no kick-outs.
- All applications should be updated, redeployed and available for users by 6:00 A.M.
Below is a high-level overview of the process that was built to accomplish the above objectives:
1. At a scheduled time, both data and metadata from the source systems were refreshed to tables in the enterprise data warehouse.
2. Concurrent with the data warehouse update, a query was executed against the EPMA repository to retrieve a list of the members, hierarchies and aliases for all dimensions in the EPMA Shared Dimension Library.
3. At this point, a series of custom SQL operations was executed to compare the information extracted from EPMA to the latest metadata in the source systems. Each difference was logged in a change history table, along with an indicator of the action required to correct the difference (e.g., a member move to correct a hierarchy discrepancy).
4. Once all of the comparisons had been performed, a custom Perl script was invoked to read the records from the change history table and create an EPMA Batch Client file with the appropriate syntax and required properties for each type of change.
5. The EPMA Batch Client was then invoked with the file created in the previous step, with the expected result of bringing EPMA in sync with the source system metadata.
Note: At this stage in the process, if any new members were created in EPMA, corresponding Batch Client commands were executed to exclude those members from all applications that included the dimension.
6. Once the Batch Client finished processing, the EPMA repository was re-queried to extract the updated metadata from the Dimension Library, and the SQL comparisons were re-executed to ensure that all differences had indeed been corrected. If any differences were noted, an additional attempt was made to invoke the EPMA Batch Client to correct them. Any differences remaining after the second attempt were logged to be included in an email notification at the conclusion of the process.
Note: The update process allowed for custom hierarchies in EPMA that did not exist in the source system by reviewing only those parent-child relationships from EPMA where the parent existed in the respective source system.
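To make the comparison stage more concrete, here is a minimal Python sketch of the diff logic (illustrative only; the real solution used SQL for the comparisons and Perl to build the Batch Client file, and all names below are hypothetical). It also honors the rule from the note above: EPMA parent-child relationships are reviewed only when the parent exists in the source system, so custom EPMA-only hierarchies are left alone.

```python
# Illustrative sketch only -- not the original SQL/Perl. Detects hierarchy
# differences between source system metadata and the EPMA Shared Dimension
# Library, producing the change actions that would be logged to the change
# history table. Function and variable names are hypothetical.

def diff_hierarchy(source_rels, epma_rels):
    """Compare (parent, child) pairs from the source system against EPMA.

    Custom EPMA-only hierarchies are tolerated: an EPMA relationship is
    reviewed only when its parent exists in the source system."""
    source_parents = {p for p, _ in source_rels}
    # Only EPMA rows whose parent exists in the source are candidates
    reviewed_epma = {(p, c) for p, c in epma_rels if p in source_parents}
    epma_children = {c for _, c in epma_rels}

    changes = []
    for parent, child in sorted(source_rels - reviewed_epma):
        if child not in epma_children:
            changes.append(("AddMember", parent, child))   # brand-new member
        else:
            changes.append(("MoveMember", parent, child))  # hierarchy discrepancy
    return changes

source = {("Assets", "Cash"), ("Assets", "AR"), ("Liabilities", "AP")}
epma = {("Assets", "Cash"),
        ("Expenses", "AR"),        # AR sits under the wrong parent in EPMA
        ("CustomRollup", "Cash")}  # custom EPMA-only hierarchy: ignored
print(diff_hierarchy(source, epma))
# → [('MoveMember', 'Assets', 'AR'), ('AddMember', 'Liabilities', 'AP')]
```

Each resulting tuple corresponds to one row in the change history table; the Perl script would then translate each action type into the appropriate Batch Client syntax.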
At this stage in the process, the EPMA Shared Dimension Library matched the source system metadata. Because new members created during the process were automatically excluded from the appropriate application views, the only changes affecting the applications at this juncture were the hierarchy and alias updates (unless the alias had been previously overridden at the application view level). The next stage in the process was intended to prepare the applications for receiving the most recent data from the source systems:
7. In the data warehouse, a series of SQL scripts was executed to build the table(s) and/or view(s) containing the source system data appropriate for each application. Each application had a different set of dimensions, and most contained data for only a portion of the Entity hierarchy, so separate database objects were created for each.
8. Once the data warehouse objects containing the latest source system data had been created, a query was executed against the EPMA repository to return the list of members included in each application view for each dimension.
9. Custom SQL statements were executed to compare the list of unique members appearing in the tables/views containing the data for each application to the list of members currently included in the application.
10. Any members present in the data tables/views but not present in the application view were noted in a change history table so that they could be unfiltered (included) in the application view. Additionally, custom SQL was used to check whether the member’s parent existed in the application. If not, the custom SQL would traverse up the hierarchy (from the Shared Dimension Library) to determine which ancestor(s) would need to be included in order to include the missing Level 0 member.
11. A custom Perl script was executed to read the change history table and the list of members that needed to be included for each application and dimension; the script then generated the EPMA Batch Client file to process the include requests.
12. Once the EPMA Batch Client was invoked, referencing the file(s) generated in Step 11, each application view contained all of the Level 0 members appearing in the data warehouse tables/views with the latest source system data.
13. As with the Shared Dimension Library updates, a check was performed after the EPMA Batch Client finished to ensure that every Level 0 member in the tables/views now appeared in the respective application views. (The goal of this process was to guarantee there would be no kick-outs when the data was loaded in a later step. The process worked as expected, and kick-outs from data loads were extremely rare in the automated process.)
14. A query was executed against the EPMA repository to return a list of application views now showing as out-of-sync with the latest deployment. Any out-of-sync applications were redeployed using the EPMA Batch Client.
15. With the applications updated with the latest metadata, the latest source system data was loaded from the data warehouse tables/views created in Step 7.
Note: For Essbase and Planning data loads, a calculation script was executed prior to the load to clear the existing data for the POV(s) included in the data warehouse tables/views, and another script was executed after the data load to perform any aggregations.
16. As a final step in the process, custom code was executed to read the change history table and get a list of the transactions affecting each application. These were then parsed into separate files for each application and emailed to the designated functional and technical owners for each application.
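The ancestor traversal described above (determining which ancestors must be included before a missing Level 0 member can be included) can be sketched as follows. This is an illustrative Python sketch, not the original SQL; the function name, dictionary shape and member names are all hypothetical.

```python
# Illustrative sketch only -- not the original SQL. Given a Level 0 member
# that appears in the data warehouse but is filtered out of an application
# view, walk up the Shared Dimension Library hierarchy to find every
# ancestor that must also be included (un-filtered) first.
# parent_of maps child -> parent in the shared dimension hierarchy.

def members_to_include(member, parent_of, app_view_members):
    """Return the member plus any missing ancestors, ordered top-down so
    parents are included in the application view before their children."""
    chain = []
    current = member
    while current is not None and current not in app_view_members:
        chain.append(current)
        current = parent_of.get(current)  # becomes None past the root
    chain.reverse()  # include the highest missing ancestor first
    return chain

parent_of = {"Cash": "CurrentAssets", "CurrentAssets": "Assets", "Assets": "Entity"}
app_view = {"Entity", "Assets"}  # members already in the application view
print(members_to_include("Cash", parent_of, app_view))
# → ['CurrentAssets', 'Cash']
```

The top-down ordering matters: the Batch Client include requests generated from this list must add `CurrentAssets` before `Cash`, since a member cannot be included in a view unless its parent is already present.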
The process may sound complicated, and indeed it was (and is); however, the payoff from this automation was substantial. After the implementation, there were several significant changes to hierarchies for various dimensions in the source systems, and no effort whatsoever was required to update the Oracle EPM metadata to match (unless you count waiting until the following morning). I hope this post has given you an idea of how a completely lights-out metadata update process can be built with EPMA.