Monday, November 14, 2016

Determining Secure Sockets Layer (SSL) Implementation Methods

Implementations that require Secure Sockets Layer (SSL) must decide which method to use. The three most common methods are SSL Offloading, SSL Terminated at the Web Server, and Full SSL. There are additional options that can be selected in conjunction with one of these primary methods; those are discussed later.

The simplest and thankfully most common method for implementing SSL with the EPM Suite is SSL Offloading. Offloading refers to moving the SSL certificates to a load balancer, a physical device used to balance network traffic to the EPM System. Offloading requires no SSL-specific configuration within the EPM Suite. Placing the SSL certificates on a physical device also eliminates any performance concerns that can sometimes arise from the use of SSL, and it simplifies certificate maintenance within an organization by keeping all SSL certificates in a single location.

The second option, SSL Terminated at the Web Server, is more involved but still fairly straightforward. By default most of the EPM System web components are accessed through OHS (Oracle HTTP Server) via redirects within the OHS configuration. Oracle is working to integrate the remaining components into OHS. The few that are not currently included, FDM for example, can be added to the OHS configuration by modeling existing redirects. This creates a single point of entry for all EPMS web applications. SSL certificates are assigned to the OHS web server, and entries are added to the OHS configuration to direct inbound traffic to use SSL (HTTPS). By securing communication between clients and the OHS server(s) using SSL, and blocking direct access to the WebLogic-deployed applications through firewalls or ACLs, the desired SSL configuration is achieved. This configuration allows for non-SSL communication between the OHS web servers and the WebLogic or IIS EPMS web applications. However, as all of this communication is server-to-server, there is much less exposure to security threats. Let's face it, if someone can trace your backend network traffic you have bigger concerns!
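As a rough sketch of what the OHS side can look like (the port, wallet path, and rewrite rule below are examples, not values from any specific install), the SSL virtual host in ssl.conf plus an HTTP-to-HTTPS rewrite might be:

<VirtualHost *:4443>
   # terminate SSL at OHS; the certificate lives in an Oracle Wallet
   SSLEngine on
   SSLWallet "${ORACLE_INSTANCE}/config/OHS/ohs_component/keystores/default"
</VirtualHost>

# send any inbound HTTP traffic over to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R,L]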

The final option, and by far the most complex, is using SSL for both client-to-server communication and backend server-to-server communication. In addition to securing OHS using SSL, all communication between OHS and the WebLogic and IIS web applications is also secured. This requires creating SSL certificates for each WebLogic server and each IIS web server. The SSL certificates must then be added to the WebLogic and IIS configurations, and the EPM System must be configured to use HTTPS for all internal communication. Given the 'chattiness' of the EPMS internal communications, using Full SSL can have a significant performance impact. This configuration also adds to the complexity of supporting Oracle EPMS and increases maintenance, as SSL certificates are typically good for two years or less.

There are two additional options for using SSL within an EPMS deployment. The first is using SSL between Oracle EPMS Shared Services and the corporate external authentication directory, MSAD or LDAP. This is typically not a decision made by the implementation team or project, but rather a corporate standard. It simply secures communications between the Foundation server and the Active Directory or LDAP servers within the corporate domain. Traffic between the EPMS components and their RDBMS server can also be secured through SSL. This is less common and requires more setup on the RDBMS side than within the Oracle EPMS configuration. The connection string given to the EPMS implementation team must contain the correct SSL parameters, and the SSL certificates must be added to the RDBMS clients on each EPMS server.
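For an Oracle RDBMS, for example, the SSL-enabled connection string typically points at a TCPS listener; the host, port, and service name here are placeholders:

(DESCRIPTION=
   (ADDRESS=(PROTOCOL=TCPS)(HOST=dbhost.example.com)(PORT=2484))
   (CONNECT_DATA=(SERVICE_NAME=EPMDB)))

The database server's certificate (or its issuing CA) must then be trusted by the Oracle client wallet on each EPMS server.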

SSL is becoming more and more common within the Oracle EPMS landscape. If you're considering implementing SSL with your EPMS deployment, weigh your choices carefully, as your decisions will affect more than the initial configuration. Keep in mind the other security holes in your environment and aim for the best security for your effort.

A Possible Disaster Recovery Strategy for an EPM System


The client was already utilizing EMC's SRDF/A (Symmetrix Remote Data Facility - Asynchronous) to replicate the EPMS Production DB files and logs to their QA environment, which doubles as their DR environment. The DR plan makes use of the replicated HFM, Planning, and FDM DB files to recover the applications associated with those components. The only additional requirement was for the applications to exist in the QA environment with the same names as Production. This ensures the Production applications are recognized by all other QA components in the event DR is triggered.
The client had also already established LCM migrations for Security and their EPMA components. The migrations were being used to push updates from QA to Production. Similar LCM migrations were created to reverse the process for DR. New LCM migrations were created for migrating Financial Reporting, Calc Mgr, and Essbase objects (substitution variables, etc.). All of the LCM migration definitions will be scripted and automated to run nightly. The artifacts are exported to a NAS device, which is also replicated to QA through SRDF/A. In the event DR is triggered, LCM will be run from the QA environment to import the Production Security, EPMA, Financial Reporting, Calc Mgr, and Essbase objects into QA.
The final piece to account for was Essbase. Their Production Essbase applications also exist in their QA environment with the same names. As with Planning and HFM, it's critical that the applications exist in the QA environment with the same names; this ensures proper registration of the applications with Shared Services and, in the case of Essbase, with the Essbase.sec file. The Production Essbase applications are backed up nightly through a list of exports scripted in MaxL (a sketch of such an export follows below). They also take a weekly file system backup, which is likewise replicated to QA through SRDF/A. In a DR scenario, the Production Essbase objects (outline files, report scripts, etc.) will be restored into the QA Essbase application folder structure, overwriting the QA objects. Level 0 data will then be loaded and aggregated.
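A minimal sketch of such a nightly MaxL export; the host, credentials, application/database, and file path are placeholders:

login admin identified by password on essbaseprod.example.com;
/* export level 0 data so it can be reloaded and aggregated at the DR site */
export database PlanApp.Plan1 level0 data to data_file '/backups/PlanApp_Plan1_lev0.txt';
logout;
exit;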
Following these steps, along with additional steps for backing up QA, finalizing the Security and EPMA migrations and some minor tuning, the QA environment should be fully functional as their Disaster Recovery EPMS site.
Every EPMS DR implementation is unique and customized to meet a client's specific needs and appetite for cost. We made good use of the investment this client had already made in technology to create a fairly straightforward and easy-to-manage Disaster Recovery process for their Oracle EPMS implementation.
Let us help you with your DR Strategy!

OHS Troubleshooting on multiple servers

Large Environment OHS Configuration in 11.1.2.3

Running Oracle HTTP Server (OHS) on multiple servers helps an environment run faster and more stably, and allows it to keep running if one server goes down. However, configuration and customization become more difficult as a result: every server added means repeating the same steps and changes. When you run the configuration utility to make a change, it's easy to forget you have to repeat it on every server running OHS.
A new feature in 11.1.2.3 helps with this: you have the option to store the OHS configuration files on a shared drive. If you select this option, all OHS servers read their configuration from that location. That means you make the change once and it's reflected on all servers – you only have to restart services and the servers will all use the same information.
There are a couple of things to keep in mind if you want to use this feature, though:
  • It only works correctly if you select it the very first time you configure OHS. Trying to change to a shared drive later doesn't currently work; Oracle is aware of this limitation and reports they are working to address it
  • If you plan to “silo” your services via OHS configuration, this is not the option for you
  • After making a change on one server, you still need to restart the OHS service on all other servers for it to take effect
  • If you manually modify configuration files, keep in mind that all servers now use the same file, so if you add an entry for a local file system, it needs to exist on all OHS servers in the same location
This feature can help keep changes simple and centralized on large environment installs. This will help reduce cases where one server acts one way and another acts entirely different because of a missed configuration or typo in a file.

11.1.2.3.500 Adds Support in the Hyperion EPM Certification Matrix

The .500 PSU for Oracle's EPM version 11.1.2.3 adds support for the latest Oracle RDBMS version, 12c, as well as Internet Explorer 10 and Windows 8. Some things you need to know: Oracle released Oracle Database 12c last year, but the Oracle EPM suite had not been certified on that release until now. With the .500 patch set, Oracle EPM components are now supported with Oracle 12c, with a few caveats.
Oracle 12.1.0.1+ is supported only when:
  • EPM 11.1.2.3 is patched to the .500 PSU level or later
  • It is used with WebLogic Server
  • WebLogic is version 10.3.6 or later
  • If FCM and Disclosure Management are part of the installation, Oracle Fusion Middleware is version 11.1.1.7 or later
Internet Explorer 10 has been around for a while, and it performs better than legacy releases of Microsoft's flagship browser. The EPM suite is finally certified for Internet Explorer 10 with the 11.1.2.3.500 PSU applied. Support is offered when running Internet Explorer on Windows 7, Windows 8, or Windows Server 2008 OS platforms only. Don't let your users upgrade to IE 11; that version has not yet been certified.
As enterprises refresh their end-user hardware and expand programs like BYOD, Windows 8 is becoming more and more prevalent on corporate networks. This patch set adds support for Windows 8 when running EPM clients like Financial Reporting Studio. When moving to Windows 8, make sure your users are on Internet Explorer 10, though; IE 7, 8, and 9 are not supported on Windows 8.
It looks like Windows XP has been completely dropped as a supported client platform in this patch set update. Although most enterprises have moved away from Windows XP, there may be some BYOD users still on the older operating system, so plan accordingly.
Many organizations have been waiting for support for these key versions for some time, and now EPM is ready to meet their expectations.

Deploying to WebLogic: Single Managed Server Deploy


When you do your first WebLogic deploy, you have the option to ‘Deploy the Java web applications to a single managed server.’ What does that mean and when should you use it?
Deploying to a single managed server means that all WebLogic components share a single WebLogic deployment instead of each being deployed to their own instance, complete with their own Windows service. They share a single Java engine, a single memory stack and a single WebLogic process. It means less memory, less CPU and fewer system resources are used.
Sounds great, doesn’t it? So why isn’t that just the way we do it? Why is it used only in rare circumstances?
All that resource 'savings' comes at a cost and with restrictions. First, a 'single managed server' is just that – ONE server. So if your environment will be larger than a single server with web components, you can't use this option. There are some exceptions to that (e.g., Essbase and IIS components don't count for this purpose), but anything tied to a WebLogic service has to be in one location.
Next, they all share the same pool of memory. While this means less total overhead, because you now only have one item instead of 10, it also means a “traffic jam” of sorts can happen. Take this simplified example:
Running a FR Report against a Planning application
-        User invokes from Workspace (WL item 1)
-        User rights to report and Planning are validated from Shared Services (WL item 2)
-        Reporting core accesses the report (WL item 3)
-        Planning is used to pull needed information (WL item 4)
-        FR Web presents the report (WL item 5)
All this happens for a single report. When these items are separate, each WL instance will spike to do the work as quickly as possible. When they share the same memory, it’s not a case of “you take your turn, then I get mine”. Instead they all want the memory at the same time. Not a problem for small reports or small loads, but as the reports get bigger and more run at the same time, we have problems.
A note on the example above. In 11.1.2.x, Workspace and Shared Services will share the same WebLogic deploy even if it’s not a single managed server. They both deploy together and this is the instance that is used if you decide to deploy to a single managed server.
Finally, when too many users need access to the system, a single managed server simply can’t handle it. Think of it this way – would you rather have 100 2-lane roads or one massive 20-lane road? There is a limit to the number of concurrent connections WebLogic can handle (no matter how well tuned). Once you get a significant number of users, you need more. It’s a case of 'many hands make light work.'
So while a single managed node works well for a sandbox or a small development environment, reducing the needed resources to run products, it isn’t intended for enterprise-level production environments.

Knowledge is Power: Training Users for Success

Whether due to limited budget or time constraints in project schedules, user training often falls by the wayside during software implementations. In the interest of time, users may receive informal on-the-job training that is just enough to complete their work. They may not use some product features to their fullest potential and may be unaware of other useful ones. A more formal approach can avoid these and other issues. Following are several compelling reasons to invest more in training:

1.  Educate staff how to use the software.

Formal training is particularly critical if users are unfamiliar with the software. Training is also required as current product features undergo enhancements, new features are developed, and knowledge is lost through employee turnover. At a minimum, users must gain familiarity with the key capabilities of each product directly applicable to their job duties.

2.  Differentiate systems.

Oftentimes companies employ a wide range of systems to facilitate business operations. Some tools offer similar capabilities so it is all the more important that users understand the purpose of each tool and when each is more appropriate. For example, users can view data in both Hyperion Financial Reporting and Smart View. However, Smart View is useful for ad-hoc queries and data validation while Financial Reporting is geared toward routine highly formatted reports and books for monthly internal and external reviews.

3.  Increase user adoption.

When users understand the benefits of a tool such as improved traceability of data and increased productivity they are more likely to accept and utilize it. Training can prove to be especially helpful when users need to discontinue use of prior systems with which they are more accustomed.

4.  Communicate current and new business processes.

Training should additionally address impacts to daily operations due to introduction of new systems and ever-changing regulatory compliance requirements. For example, do we need more technical staff to support ongoing system maintenance? What kinds of journal entries will we perform in the general ledger versus Hyperion Financial Management?

Get a Better Handle on Log Files in Oracle EPM 11.1.2.3.x


EPM server administrators may have noticed that log file sizes grow a lot faster in 11.1.2.3.x than in many of the recent releases. A lot of these log messages reference the JVM's usage of the Oracle Wallet file. You'll see messages like these:
OracleFileSSOWalletImpl.getWalletData: enter...
OracleFileSSOWalletImpl.getWalletData: System.getProperty(user.name)=DEMO$
OracleFileSSOWalletImpl.getWalletData: locking (shared) dummy sso file..
OracleFileSSOWalletImpl.getWalletData: locking (shared) sso file...
Oracle Wallet: wallet size 11061
OracleWallet: getSecretStore
OracleSecretStore: loading wallet from stream
OracleSSOKeyStoreImpl: engineLoad
OracleKeyStoreSpi: Loading wallet from stream
OracleKeyStoreSpi: Opening safe 0
OracleKeyStoreSpi: Opening safe 0
OracleKeyStoreSpi: found cert bag
OracleKeyStoreSpi: found cert bag
OracleKeyStoreSpi: found cert bag
OracleKeyStoreSpi: found cert bag
OracleKeyStoreSpi: found cert bag
OracleKeyStoreSpi: found cert bag
OracleKeyStoreSpi: found cert bag

While these messages might be helpful when troubleshooting Oracle Wallet issues, we have seen that issues with the wallet file are extremely rare. These messages become extraneous and simply clog up the log files. They are now logged (in 11.1.2.3.x) because Oracle added a new switch to their Windows service install scripts and their setCustomParams scripts (primarily used for Unix): -Doracle.pki.debug=true. If you set this switch to false in the Windows registry (and/or in the setCustomParams scripts) for each product, these messages will not get logged and the log files should stay manageable longer, without excessive growth.
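A sketch of the Unix-side change, using the Planning script as the example (script names vary by product):

# in setCustomParamsPlanning.sh -- turn off Oracle Wallet/PKI debug logging
JAVA_OPTIONS="$JAVA_OPTIONS -Doracle.pki.debug=false"
export JAVA_OPTIONS

On Windows, make the equivalent change to the -Doracle.pki.debug entry in each product service's JVMOption registry values.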

Essbase Active/Passive Clustering - OPMN.xml Updates


Essbase allows for active/passive clustering of the servers for use in a failover or DR scenario. Oracle uses OPMN to handle the failover from one node to another in the event of a failure. The EPM configuration tool, used to configure Essbase, does not update the opmn.xml file correctly, so the file needs to be manually updated as a post-install step.
Below are the edits needed to ensure failover of the nodes:
Manual edits of the /user_projects/{eps_instance}/config/OPMN/opmn/opmn.xml file:
1. Lines 4-8 “Notification Server” Sections
Original Entry:
   <notification-server interface="any">
      <ipaddr remote="host002.corporate.com"/>
      <port local="6711" remote="6712"/>
      <ssl enabled="true" wallet-file="/u01/Oracle/Middleware/user_projects/host002/config/OPMN/opmn/wallet"/>
   </notification-server>
Update to disable SSL (rarely used):
   <ssl enabled="false" wallet-file="/u01/Oracle/Middleware/user_projects/host002/config/OPMN/opmn/wallet"/>
Add the following lines before </notification-server> 
   <topology>
      <nodes list="host002.corporate.com:6712, host004.corporate.com:6712" />
   </topology>
Note: on the 004 node, the order of the nodes is reversed.
The updated section should look similar to this:
 <notification-server interface="any">
      <ipaddr remote="host002.corporate.com"/>
      <port local="6711" remote="6712"/>
      <ssl enabled="false" wallet-file="/u01/Oracle/Middleware/user_projects/host002/config/OPMN/opmn/wallet"/>
      <topology>
         <nodes list="host002.corporate.com:6712, host004.corporate.com:6712" />
      </topology>
 </notification-server>
2. Update the ias-component type (line 55 in my working file, line 52 in the original unedited file)
Original Entry:
      <ias-component id="Essbase1">
      <process-type id="EssbaseAgent" module-id="ESS">
Update to the name of the Essbase cluster used in the configtool:
      <ias-component id="TCEssP">
Add service failover entry:
      <process-type id="EssbaseAgent" module-id="ESS" service-failover="1" service-weight="100">
Note: on the second node, the service-weight is "101".
The updated section should look similar to this:
      <ias-component id="TCEssP">
      <process-type id="EssbaseAgent" module-id="ESS" service-failover="1" service-weight="100">
3. Make sure this value reflects the cluster name used in the configtool (line 80 in my working file, line 77 in the original unedited file)
CONFIRM: <variable id="ESSBASE_CLUSTER_NAME" value="TCEssP"/>
4. Disable auto agent restart (line 86 in my working file, line 83 in original unedited file)
BEFORE: <process-set id="AGENT" restart-on-death="true">
AFTER: <process-set id="AGENT" restart-on-death="false">
After the xml files are updated you can test failover with this process:
Start Essbase on host 002 using {EPM_INSTANCE_BIN}/bin/opmnctl startall
Verify it started correctly {EPM_INSTANCE_BIN}/bin/opmnctl status
Processes in Instance: EPM_host002
---------------------------------+--------------------+---------+---------
ias-component                    | process-type       |     pid | status
---------------------------------+--------------------+---------+---------
TCEssP                           | EssbaseAgent       |   30407 | Alive
Start Essbase on host 004 using {EPM_INSTANCE_BIN}/bin/opmnctl startall
Verify it started correctly {EPM_INSTANCE_BIN}/bin/opmnctl status

Processes in Instance: EPM_host004
---------------------------------+--------------------+---------+---------
ias-component                    | process-type       |     pid | status
---------------------------------+--------------------+---------+---------
TCEssP                           | EssbaseAgent       |     N/A | Down
Stop Essbase on host 002 using {EPM_INSTANCE_BIN}/bin/opmnctl stopall
Verify it stopped correctly {EPM_INSTANCE_BIN}/bin/opmnctl status
>opmnctl status: opmn is not running.
Verify OPMN started Essbase on the host 004 {EPM_INSTANCE_BIN}/bin/opmnctl status
Processes in Instance: EPM_host004
---------------------------------+--------------------+---------+---------
ias-component                    | process-type       |     pid | status
---------------------------------+--------------------+---------+---------
TCEssP                           | EssbaseAgent       |   64812 | Alive

HFM Copy Application Utility Makes a BIG Return in 11.1.2.4.2


Since the release of HFM 11.1.2.4 and its change in architecture, many of the utilities that HFM administrators had come to know and love were deprecated from the product. One of the most popular was the standalone Copy Application utility. It allowed you to do a direct, DB-level copy of an HFM application from one environment to another, regardless of version (you could upgrade the application after the copy) and, in the most recent versions of the utility, regardless of DB type. It copied all application table contents between the environments, with no need to reload or re-consolidate data. All you needed were two UDLs created on the server that contained the utility: one pointing to the source HFM DB server and one pointing to the destination HFM DB server.
To compensate for this loss of functionality, Oracle added an "Application Snapshot" artifact to the Lifecycle Management export of an HFM application. The HFM Administrator's Guide adds this note about the artifact:
Application Snapshot migration requires all users to be logged out of the application. The system logs out all users and shuts down the application if there are no active tasks present for the application. The Application Snapshot is exported at the end of the migration after processing other artifacts. When importing, the Application Snapshot cannot be selected with other artifacts; however, if the application does not already exist in the target, you must include the application definition artifact to create the application shell.
The Lifecycle Management export and import of this Application Snapshot reaches similar functionality to the old Copy Application utility, but in my experience it is a bit 'temperamental'. The downsides of the Application Snapshot artifact are:
  • Large applications require more time to export than you may be accustomed to
  • Requires a lot more space on the file system where the LCM export will be exported to
  • Requires a second import step that, potentially, could take just as long, or longer than, the export step
I’ve found that the only real benefit is that, since this is an LCM artifact, it is database type agnostic and you can export and import between different DB types (Oracle to SQL or vice versa). Converting applications to use different DB types is not something that happens every day.
With the release of HFM 11.1.2.4 PSU 200, Oracle has added an "Import Application" utility built into the 'Consolidation Administration' screen in Workspace. This new utility returns most of the old Copy Application utility functionality. It basically does a straight copy of the source application's tables, much like the old Copy Application utility did, and it works pretty well.
Note: If you are trying to copy an EPMA application, unlike with the old Copy Application utility, you will not receive a warning that you are copying/importing an EPMA application. You will need to do the following, much like the old requirements:
  1. Run the utility, to copy/import the application
  2. Run the ‘Transform Classic to EPMA’ application wizard from Workspace
  3. Take an EPMA application LCM export from the source application and import it to the target environment
  4. Redeploy the application from EPMA, to re-sync metadata
An additional note for those considering using the new utility to upgrade applications from a previous 11.1.2.x release: I did test copying an HFM 11.1.2.3 classic application, and it does seem to work fine. To verify that the application was up to date, I ran the 'Upgrade Applications from Earlier Release' task from the EPM Configurator on the HFM server, and there were no issues. However, I haven't heard a stance from Oracle on whether they will support this methodology for an application upgrade, so proceed carefully.


Here are the procedures for using the Import Application utility against an Oracle DB. Prerequisite: the Oracle DBA must grant CREATE DATABASE LINK privileges to the HFM schema in the target Oracle DB. Run the Oracle_Create_ImportApp_package.pck file, delivered with HFM 11.1.2.4.200, once, in SQL Developer.
After successful execution, the new package, HFMUTIL_PKG, will show up under the HFM schema's available packages in SQL Developer. If you see a green dot on the package, right-click it and click Compile; if the green dot is not there, this step is unnecessary.

Once properly compiled, the package is ready to use.
Create a database link to the source environment’s HFM schema on the source environment’s DB server, using the following syntax:
CREATE DATABASE LINK linkname CONNECT TO SourceHFMschemaname IDENTIFIED BY SourceHFMSchemapassword USING '//SourceDBHostName:SourcePortNumber/SourceServiceName';
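A quick way to sanity check the link before continuing is the standard Oracle test query:

SELECT * FROM dual@linkname;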
Upon successful creation of the database link, log in to Workspace and go to Navigate -> Administer -> Consolidation Administration, then click Import Application. The database link should be available, along with a dropdown list of the source environment's HFM applications.
Select the source application you wish to copy, then enter the name you would like the application to have in the target environment, as well as the description. Then select the HFM cluster and the Shared Services project that the application will be placed under. You can then select what data you would like to copy from the source application to the target, including audit information and all data, or you can filter what data to copy based on a Scenario and Year combination.
Prior to execution, ensure that the application is in admin mode and that no users are working in the application, then click OK.
You will then be taken to the Admin Tasks screen to monitor the import process.

Upon completion, the application should be available and identical in the target environment. Provisioning for the users/groups needing access will then need to be completed on the application. (Note: the source application does not have to be HFM 11.1.2.4.200; it can be 11.1.2.4.000 or above.)
Final thoughts
Oracle has also made available Patch 22046375, a Patch Set Exception for HFM 11.1.2.3.700. This is an updated version of the classic Copy Application utility that works with HFM 11.1.2.4, much as in previous versions. It was the interim solution Oracle provided before making the new UI-based utility available. Oracle's strategic direction is the new utility, but the option is there to use either for the time being. How long Oracle continues to support the classic utility remains to be seen; with new releases and patch sets coming in pretty rapid fashion, it's best to be prepared for a future of only having the new utility at hand. Additional enhancements may come to bring the new UI utility into parity with the classic one. If something you deem critical is missing from the utility, open an enhancement request with Oracle; I'm sure they will look into it. The uproar from customers over totally losing the classic utility prompted the addition of the new UI-based utility, so there is no reason to believe Oracle would not listen to your request.

Using SSL Offloading with EAS to Avoid Load Failures


It's not uncommon these days to load balance the HTTP layer for the EPM stack, using a virtual IP address, or 'VIP'. Many of our clients are also implementing SSL offloading at the VIP. When using SSL, sometimes EAS will fail to load the EAS Console correctly. It will usually show an error with one of three possible bad URLs:
  • a URL that is not SSL
  • a URL that does not use the correct VIP address
  • a URL pointed at the default 10080 port
To force EAS to always call back to the correct SSL address, you can add a Java argument for EAS: -DEAS_FE_URL.
In Unix Environments:
The Java argument should be added to setCustomParamsEssbaseAdminServices.sh (or .bat), found in the EPM_INSTANCE/bin/deploymentScripts/ directory. The argument can be added directly into the "JAVA_OPTIONS" section of the file, or as its own entry. I prefer a separate entry so I can better see that it has actually been applied.
I add lines like the ones sketched below; they can be added at the top or the bottom of the file.
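Something like the following, where the URL is a placeholder for your load balancer's SSL address:

# force EAS console callbacks to the SSL VIP (URL is an example)
JAVA_OPTIONS="$JAVA_OPTIONS -DEAS_FE_URL=https://epm.example.com:443"
export JAVA_OPTIONS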
In Windows Environments:
The Java argument should be added appropriately in the Windows registry (in the EAS service's JVMOption entries).

HYPERION ADMINISTRATOR NOTES


The list below is an example of things an IT administrator of Oracle EPM should be prepared to do, track, and/or manage:
  • Monitor system performance with ‘Performance Monitor’ or some similar tool
  • Monitor Hyperion System Logs for errors and outage conditions
  • Manage Hyperion Services for up time
  • Manage complete system backups, including the necessary MaxL scripting for Essbase backups (archive mode, etc.; see the sketch after this list)
  • Manage or assist with management of all system RDBMS backups
  • Troubleshoot issues with both client and server systems
    • SmartView Client connections, FR Studio connections, HFM Client support
    • Essbase cube stop/start, HFM application start/stop, JVM monitoring and troubleshooting, IIS/Apache/OHS support and maintenance
  • Migration of objects between environments (Dev, Test, Production) using LCM, or manually for earlier versions
  • Manage the technical documentation of all fixes, versioning, and issues for the suite
  • Manage all technical tickets both with the internal ticketing system, and externally to Oracle or other vendors
  • Be prepared to apply patches to the environment (with the help of the networking team), including:
    • System level OS patches
    • Patches for third-party products (J2EE servers, PDF Print software, Microsoft Office, etc.)
    • Patches to the product suite, which may include full product re-installations
  • If required by IT policy, the management of users/groups within Shared Services and their proper role assignment may be a responsibility
  • Understand changes in application base/scope to ensure the architecture does not need to grow
  • Monitor all system usage metrics so that additional hardware is expanded in advance of need: RAM, Disk, Processor, etc.
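As referenced in the backups bullet above, a minimal MaxL sketch of an archive-mode Essbase backup; the host, credentials, application/database, and paths are placeholders:

login admin identified by password on essbaseserver.example.com;
/* put the database in read-only archive mode and record the files to back up */
alter database PlanApp.Plan1 begin archive to file '/backups/PlanApp_Plan1.arc';
/* the file system copy of the application directory happens here */
alter database PlanApp.Plan1 end archive;
logout;
exit;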

Just about me

Hi all,

I thought of having a personal blog for sharing and expressing my views about Oracle Hyperion EPM infrastructure. I am one among the thousands of people who love to work with this wonderful EPM tool.
 
I started my career with Hyperion 9.3.x and have worked up to the latest version, 11.1.2.4.
 
With infrastructure knowledge in:
Hyperion Essbase
Planning
Hyperion Financial Management
Financial Reporting
Profitability and Cost Management
Strategic Finance
Data Relationship Management
Financial Data Quality Management and
Financial Data Quality Management, Enterprise Edition

I will be posting my findings about this product in the coming days.

Happy reading & blogging.

- Ravikiran Reddy.

Smartview IE Timeout Settings

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"ReceiveTimeout"=dword:75300
"KeepAliveTimeout"=dword:2bf20
"ServerInfoTimeout"=dword:2bf20

(In the .reg snippet above, the dword values are hexadecimal: 0x75300 = 480000 and 0x2bf20 = 180000 decimal, matching the values below.) The following steps should be done with the assistance of your systems administration group. It is recommended that a backup of the registry be taken prior to making any modifications.
On the client machine, update/add the following registry keys:
1. Open the Registry, Start -> Run -> Regedit.
2. Locate the following section(s):
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
or
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
3. Create the following new DWORD keys with decimal values:
ReceiveTimeout=480000
KeepAliveTimeout=180000
ServerInfoTimeout=180000
In this example, the ReceiveTimeout setting is 8 minutes, and the KeepAliveTimeout and ServerInfoTimeout settings are 3 minutes. Set these to be greater than your longest-running request.
4. Restart the machine for the new settings to take effect.

Oracle EPM 11.1.2.3 Patching Commands

HSS Patch

opatch.bat apply D:\Oracle\Middleware\EPMSystem11R1\OPatch\20675028 -oh D:\Oracle\Middleware\EPMSystem11R1 -jre D:\Oracle\Middleware\jdk160_35

Workspace Patch

opatch.bat apply D:\Oracle\Middleware\EPMSystem11R1\OPatch\20612400 -oh D:\Oracle\Middleware\EPMSystem11R1 -jre D:\Oracle\Middleware\jdk160_35

HFM Patch

opatch.bat apply D:\Oracle\Middleware\EPMSystem11R1\OPatch\20955781 -oh D:\Oracle\Middleware\EPMSystem11R1 -jre D:\Oracle\Middleware\jdk160_35

FR Patch

opatch.bat apply D:\Oracle\Middleware\EPMSystem11R1\OPatch\20838970 -oh D:\Oracle\Middleware\EPMSystem11R1 -jre D:\Oracle\Middleware\jdk160_35

R&A Framework

opatch.bat apply D:\Oracle\Middleware\EPMSystem11R1\OPatch\20768325 -oh D:\Oracle\Middleware\EPMSystem11R1 -jre D:\Oracle\Middleware\jdk160_35


HFM 11.1.2.3.701

opatch.bat apply D:\Oracle\Middleware\EPMSystem11R1\OPatch\21458469 -oh D:\Oracle\Middleware\EPMSystem11R1 -jre D:\Oracle\Middleware\jdk160_35


opatch.bat apply D:\Oracle\Middleware\oracle_common\OPatch\21240419\oui -oh D:\Oracle\Middleware\oracle_common -jre D:\Oracle\Middleware\jdk160_35

opatch.bat apply D:\Oracle\Middleware\oracle_common\OPatch\16810628 -oh D:\Oracle\Middleware\oracle_common -jre D:\Oracle\Middleware\jdk160_35



13866584 - ORACLE.WEBSERVICES.STANDALONE.CLIENT.JAR:DUPLICATE ENTRIES:LICENSE.TXT


Install Instructions:
---------------------

1. Unzip the patch zip file into the PATCH_TOP.
   unzip -d PATCH_TOP p13866584_111170_Generic.zip

2. Stop all servers (AdminServer and all Managed server(s))

3. Set the ORACLE_HOME environment variable to the JRF Home,
   for example "[MW_HOME]/oracle_common" directory.

4. Make a backup of the "oracle.webservices.standalone.client.jar" located in your JRF home,
   e.g., the "MW_HOME/oracle_common" directory.
  
   For example:
   mv $ORACLE_HOME/modules/oracle.webservices_11.1.1/oracle.webservices.standalone.client.jar $ORACLE_HOME/modules/oracle.webservices_11.1.1/oracle.webservices.standalone.client.jar.BAK

5. Set your current directory to the directory where the patch is located.
   cd PATCH_TOP/13866584
  
6. Copy the "oracle.webservices.standalone.client.jar" from this patch to the target location.
   For example:
   cp oracle.webservices.standalone.client.jar $ORACLE_HOME/modules/oracle.webservices_11.1.1/oracle.webservices.standalone.client.jar
 
7. Start all servers (AdminServer and all Managed server(s)).

Sunday, November 13, 2016

Command to install ODI Studio for FDMEE 11.1.2.3


ODI Studio version: 11.1.1.7.0 (16471640)

Command: setup.exe -jreLoc <JDK location>

example: setup.exe -jreLoc C:\Oracle\Middleware\jdk160_35

EPM 11.1.2.3 Certification Matrix

Client Systems:
Windows Server 2008 with SP2+
Windows Server 2008 R2 (all SP levels included)
Windows Server 2003 with SP2/R2+, SP3+ 
Windows XP Professional with SP3+
Windows Vista with SP1+
Windows 7 (all SP levels included)
Windows 8
Windows 8.1


Web Browsers:
Internet Explorer 7.x
Internet Explorer 8.x
Internet Explorer 9.x
Internet Explorer 10.x
Internet Explorer 11 Enterprise Mode
Internet Explorer 11.x
Firefox 17+ ESR
Firefox 24+ ESR
Firefox 31+ ESR

Exceptions:

1. A 1.6 GHz minimum processor is required.
2. To work on Internet Explorer and Firefox, Calculation Manager requires Adobe Flash Player v10+ to be installed.
3. WINDOWS EXCEPTIONS
- Windows Server 2003 with SP2/R2+, SP3+ is NOT SUPPORTED in 11.1.2.3.700.
- Windows XP Professional with SP3+ is NOT SUPPORTED in 11.1.2.3.500+.
- Windows Vista with SP1+ is NOT SUPPORTED in 11.1.2.3.700+.
- Windows 8 is supported only in 11.1.2.3.500+.
- Windows 8.1 is supported only in 11.1.2.3.700.
- For 11.1.2.3.500+ only, the FDM Workbench Win32 client does not support Windows 8 or Windows 8.1.
- For EPM System 11.1.2.3.500+ only, Windows Server 2008 includes certification for MS Hyper-V / Virtualization Windows Server 2008 and Virtual Desktop Infrastructure (VDI).
4. COMPATIBILITY VIEW MODE EXCEPTIONS:
- Financial Management supports Windows Vista in Compatibility View mode only.
- Data Relationship Management does not support Compatibility View mode in Internet Explorer 8.x and Internet Explorer 9.x.
5. FIREFOX EXCEPTIONS:
- Firefox 17+ ESR is supported only for 11.1.2.3. Requires Remote XUL Manager 1.0.1+ Add-on, available at https://addons.mozilla.org/en-US/firefox/addon/remote-xul-manager/
- Firefox 24+ ESR is supported only for 11.1.2.3.500+. Releases prior to 11.1.2.3.700 require Remote XUL Manager 1.0.1+ Add-on, available at https://addons.mozilla.org/en-US/firefox/addon/remote-xul-manager/
- Firefox is not supported for FDM, although it is supported for FDMEE.
- Interactive Reporting does not support the Firefox browser.
6. INTERNET EXPLORER EXCEPTIONS:
- Internet Explorer 7.x and 8.x are not supported for Arabic.
 - Internet Explorer 10.x is supported only in 11.1.2.3.500+ and only in Windows 7.x, 8.x, and Windows Server 2008 R2 with SP1+ operating systems.
- Internet Explorer 7.x, 8.x, and 9.x are not supported on Windows 8 in 11.1.2.3.500.
- Internet Explorer 11 Enterprise Mode is supported only in 11.1.2.3.500. See MOS note "Policy for Supporting EPM System 11.1.2.2.500 and 11.1.2.3.500 with Internet Explorer 11" (Doc ID 1920566.1) for details and exceptions.
- Internet Explorer 11.x is supported only in 11.1.2.3.700+.
7. EXCEPTION: 32-bit binaries only for: Interactive Reporting.
8. EXCEPTION: 32-bit binaries only for: Firefox 45+ ESR.

Saturday, November 12, 2016

Essbase Administration Services (EAS) Hangs When Expanding Essbase Server Node


Symptoms

EAS hangs when expanding Essbase server node.

Cause

The ARBORPATH and ARBORMSGPATH environment variables were added on the EAS server and were pointing to an Essbase file location on a different server, causing the EAS server to hang.

Solution

Remove the ARBORPATH and ARBORMSGPATH environment variables on the EAS server and reboot the system.
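To confirm the variables are gone after the reboot (Windows example; on Unix, use env | grep -i arbor):

set | findstr /i ARBOR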

Set the Essbase Administration Services (EAS) Session Timeout Interval


Setting the Analytic Administration Services session timeout interval.

Solution

The session timeout can be set in the web.xml file where the Essbase Administration Services Server component is installed.
For example, if EAS is deployed with WebLogic, the file is located in:
C:\Hyperion\AnalyticAdministrationServices\AppServer\InstalledApps\WebLogic\8.1\aasDomain\servers\aas\webapps\eas\WEB-INF\web.xml.
Modify the setting in the following section (the value is in minutes; 1440 = 24 hours):
<session-config>
<session-timeout>1440</session-timeout>
</session-config>

FDMEE Error: " Connection To ODI Agent Failed"


Symptoms

Installed FDMEE as a new installation and applied the 11.1.2.3.200 patch to the environment.
When attempting to test the ODI connection information via System Settings, the connection test fails with the error:

"Connection to ODI Agent failed"

Cause

The ODI Supervisor password stored in the 'oracle.odi.credmap' credential store in WebLogic Enterprise Manager is incorrect.


Solution

a) Access Oracle WebLogic Enterprise Manager at http://weblogicadminserver:7001/em
b) Expand FARM_EPM_SYSTEM > WebLogic Domain > EPM System
c) In the right pane, click the arrow next to EPM System and choose "Credentials"
d) Expand 'oracle.odi.credmap' and edit the SUPERVISOR entry
e) Update the password for the SUPERVISOR entry to the correct ODI Supervisor password
f) Log back into the WebLogic Administration Console and start the OracleDIAgent; it should now start successfully.

Friday, November 11, 2016

Hyperion Planning Error "Failure of Server APACHE Bridge Internal Processing Error" When Deploying Application

Hyperion Planning - Version 11.1.2.3.000 to 11.1.2.3.501 [Release 11.1]
Information in this document applies to any platform.

Symptoms

Hyperion Planning application fails to deploy or redeploy. The following error message is displayed:
Failure of Server APACHE Bridge Internal Processing Error

Cause

Directory /tmp/_wl_proxy/ is not accessible for the user account running EPM System services.

Solution

Make sure that the user accounts running the Hyperion services have access to /tmp/_wl_proxy/. If you have multiple Hyperion environments running on one Unix server, make sure that each environment's owner has access to this directory.
You may find that Environment 1 has access but Environment 2 does not, or only has read access. As an example, suppose /tmp/_wl_proxy has permissions of 750: if both Environment 1 and Environment 2 run under the same group, change /tmp/_wl_proxy to 770 and try to deploy your application again.
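A quick sketch of the check and fix (account and group names are examples):

# inspect the current owner, group, and permissions on the proxy scratch directory
ls -ld /tmp/_wl_proxy
# drwxr-x--- 5 epmadmin epmgrp 4096 Nov 14 09:00 /tmp/_wl_proxy  <- group cannot write
# grant the shared group full access so both environments can write
chmod 770 /tmp/_wl_proxy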
There is no need to restart the services after changing the permissions.

How To Customize WebLogic Server Flight Recorder Events

Applies to:

Oracle WebLogic Server - Version 10.3.6 and later
Information in this document applies to any platform.

Goal

How do you customize WebLogic Server flight recording events?

Solution

There are two ways to achieve this.

1. Using JRMC

Step 1: Start Flight Recording.
Step 2: Click 'Advanced'.
Step 3: Uncheck 'Show only template settings'. After that, WebLogic Server events will show up.
Step 4: Select the WebLogic Server events that you want to trace.
Step 5: Start Flight Recording and test again. Only the selected events are recorded.


2. Using JRCMD:

You can use the following JVM option to specify the JFS template to use:
-XX:FlightRecorderOptions=settings=<file_location.jfs>
For example:
jrcmd <pid> start_flightrecording duration=30s settings=D:/wls_ejb.jfs compress=true
If the JFS file has already been copied to $JRockit_HOME\jre\lib\jfr, you can use the name directly, like:
jrcmd <pid> start_flightrecording duration=30s settings=wls_ejb
To get the JFR recording, use the following command:
jrcmd <pid> dump_flightrecording id=<record id> copy_to_file=<jfr location>
Or just stop it:
jrcmd <pid> stop_flightrecording recording=<record id>
There is no utility to convert JRT to JFS; you have to write your own custom JFS template. For example, the following JFS configuration will capture EJB_Business_Method_Invoke events only:
{
 "http://www.oracle.com/wls/flightrecorder/low/" :
   {
     "*" :
     {
        "enable" : false,
        "stacktrace" : false,
        "threshold" : 10ms,
        "period" : 1000ms
      },
     "wls/EJB/EJB_Business_Method_Invoke" :
     {
         "enable" : true,
         "period" : 0
      }
   }
}
You can use the following steps to get the path definitions:

1. Start a flight recording task.
2. Issue following command to check the flight recording status
jrcmd <pid> check_flightrecording verbose
3. You will find some lines in output like:
...
http://www.oracle.com/wls/flightrecorder/medium/:
  wls/EJB/EJB_Home_Create : disabled threshold=-1
  wls/EJB/EJB_Home_Remove : disabled threshold=-1
  wls/EJB/EJB_Pool_Manager_Create : disabled threshold=-1
  wls/EJB/EJB_Pool_Manager_Post_Invoke : disabled
  wls/EJB/EJB_Pool_Manager_Pre_Invoke : disabled
  wls/JDBC/JDBC_Connection_Close : disabled threshold=-1
  wls/JDBC/JDBC_Connection_Commit : disabled threshold=-1
  wls/JDBC/JDBC_Connection_Create_Statement : disabled threshold=-1
...
4. Then you could refer to the output and modify your own JFS template.

Troubleshooting Java[tm]: Error Occurred During Initialization of VM: Could not reserve enough space for object heap

Java SE JDK and JRE - Version 6 to 8
Information in this document applies to any platform.

Purpose

This troubleshooting guide should help you diagnose an issue that occurs right after you try to start a Java program, even with a simple "java -version". Depending on the Java version, the error message that you get is either this three-line error message:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.

or a two line error message:
Error occurred during initialization of VM
Could not reserve enough space for object heap
or a four line error message:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Troubleshooting Steps

1. Check whether -Xmx has been set explicitly or not (32 bit or 64 bit JVM)

If you don't specify -Xmx explicitly, the default selection for the maximum heap size could be implicitly set by the Ergonomics feature of the JVM that was added in JDK 5.
See also
http://www.oracle.com/technetwork/java/ergo5-140223.html
http://download.oracle.com/javase/6/docs/technotes/guides/vm/gc-ergonomics.html

The Ergonomics algorithm does not take the memory into account that is available on the system during startup. If a system has 8 GB of memory and ergonomics selects a default of 2 GB for the max Java heap size, the JVM could fail during initialization if the 2 GB cannot be reserved.

Example:
$ java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.


Solution:
Specify -Xmx explicitly. Make sure that there is enough memory available on the system for the value that you specify for the max Java heap.
java -Xmx128m -version

If you use JDK 6 you can also set the ErgoHeapSizeLimit JVM option - the maximum ergonomically set heap size (in bytes):
java -XX:ErgoHeapSizeLimit=128m -version


2. Check whether -d64 has been set (64 bit JVM on Solaris only)

If you want to use the 64-bit JVM on Solaris, both the 32-bit and the 64-bit JVM must be installed to the same folder. In order to select the 64 bit JVM, the -d64 option has to be specified. On Solaris you can use "/usr/bin/pargs <pid>" to check whether the -d64 option has been set for your process.
Example:
If you run the JVM on Solaris with:
java -Xmx4000m -version
and if you can reproduce the error, you are probably running a 32 bit JVM instead of the 64 bit JVM.
Solution:
Set the -d64 option explicitly in order to use the 64 bit JVM on Solaris.

Example:
If you run the JVM on Solaris with:
java -d64 -Xmx4000m -version
you won't be able to reproduce the error anymore.

3. Check whether the value for -Xmx  has been set too large (32 bit JVM only)

32-bit architectures limit the amount of memory any single process can allocate to 4 GB (the theoretical address space is 2^32 bytes). However, there is memory overhead that the JVM and the operating system use for a Java process. A Java process space consists of the Java heap, the Java permanent generation, the native heap, the threads, mapped files, loaded shared libraries and so on. If you run a 32-bit JVM, the maximum value for the Java heap (-Xmx) of a Java process is therefore actually much smaller than 4 GB. The actual value will depend on the operating system involved and what is running on the system. A rough rule of thumb gives a maximum 32-bit process size of 3 GB for Solaris systems and about 2.5 GB for Linux systems.

There is a limitation of the Windows 32-bit Operating System involving maximum process memory allocation. Windows 32-bit processes can only use a max of 2 GB memory address space. Although Windows does allow a process size > 2 GB through its "Physical Address Extension (PAE)", the HotSpot JVM currently does not use this feature.

See also the MSDN article on Memory Limits for Windows Releases.

Examples:
If you run a 32-bit JVM on Solaris with:
java -Xmx4000m -version
or if you run it on Linux with:
java -Xmx3000m -version
or if you run it on Windows with:
java -Xmx2000m -version
you will be able to reproduce the error.
Notes:
  • If you want to perform any tests, test with a small Java program or with the -version option in order to initialize the JVM. Don't test with -fullversion since it won't initialize the JVM.
  • You can also get the two-line error message rather than the three-line error message if the value is very large, but smaller than the maximum representable size of the Java heap (e.g. "java -Xmx4050m -version" on Solaris)

Solution:
Decrease the Java Heap Space value (-Xmx) to a reasonable value that takes all the other memory usage for the process into account or move to a 64-bit architecture (64-bit machine, 64-bit Operating System, and a 64-bit JRE).

Examples:
If you run a 32-bit JVM on Solaris with:
java -Xmx3500m -version
or if you run it on Linux with:
java -Xmx2600m -version
or if you run it on Windows with:
java -Xmx1400m -version
you won't be able to reproduce the error anymore.

Please note that in certain circumstances, even moving to a 64-bit environment will not help.  In environments where the resources available are managed, such as in a Solaris zone or an OS running under a hypervisor, the environment may not have more than 4G with which to work.  In these cases, you will need to ensure that your 64-bit environment is provided with the resources you need to successfully run your JVM. See also section 6.

4. Check whether the system is low on virtual memory (32 bit or 64 bit JVM)

Example:
If swap is exhausted and if you run a 64 bit JVM on Solaris with:
java -Xmx4096m -version
you will be able to reproduce the error.

On Solaris you can use the swap command in order to determine whether you have enough virtual memory available:
$ /usr/sbin/swap -s
total: 2519040k bytes allocated + 15706520k reserved = 18225560k used, 167896k available

In the example above, there are only about 164 MB of virtual memory available, which might not be enough to start another big Java app, because the Java heap needs a reliable backing store (swap).
Notes:
  • If the system is low on virtual memory, you get the two-line error message rather than the three-line error message.
  • The error message does appear on 32-bit or 64-bit JVMs, when swap space is exhausted.


Solution:
Decrease the -Xmx value, don't run too many heavy processes on the system, and/or configure more swap.
Note:

It is OK if non-active processes are using swap space. However, if actively used processes are constantly having their pages moved back and forth from RAM to disk-based swap areas, performance will suffer dramatically. So if you configure more swap, take into account that it may also be necessary to buy more physical memory. See also Note 1262554.1.

Example:

If swap is not exhausted and if you run a 64 bit JVM on Solaris with:
java -Xmx4096m -version
you won't be able to reproduce the error anymore.

5. Check Limitations on the System Resources in the current shell (Linux and Solaris only)

On both Linux and Solaris the stack size is controlled by setting limitations on the system resources available to the current shell and its descendants.

On Windows platforms, the stack size information is contained in the executable file.

5.1 In which shell does the application run?

On Solaris, use ptree or pargs in order to check what shell you are actually using. The special shell variable $$ provides the shell's process id. Example:
$ /usr/bin/ptree | grep $$ | grep -v grep
  15520 /bin/csh

or
$ pargs $$
1131:     ksh
argv[0]:  ksh

In the first example you are using csh; in the second, ksh.

On Linux, use pstree. Example:
/usr/bin/pstree $$
bash---pstree

In the example above, you are using bash.

5.2 Check if the Virtual Memory Limit is set too small

Example:
$ ulimit -v
60000
Shell          Check
sh/ksh/bash    ulimit -v
csh            limit memorysize
tcsh           limit vmemoryuse


Solution:
Decrease the Java Heap (-Xmx) or remove the limitations on the system resources available to the current shell.
Shell          Set
sh/ksh/bash    ulimit -v unlimited
csh            limit memorysize unlimited
tcsh           limit vmemoryuse unlimited


See also: man limit

5.3 Check if the stack size is set to unlimited (Solaris SPARC only)

Example:
$ ulimit -s
unlimited
Shell          Check
sh/ksh/bash    ulimit -s
csh            limit stacksize
tcsh           limit stacksize

On Solaris SPARC, setting the stacksize (not to be confused with the per-thread stack) to "unlimited" assigns a 2 GB limit to the stacksize of the process.

That leaves just 2 GB for the Java heap, the Java perm generation, the native heap, the threads, mapped files, loaded shared libraries and so on. Those 2 GB frequently won't be enough.


Solution:
for 32-bit JVM: ensure that stacksize is set to a reasonable value (e.g. 8192 kbytes, NOT "unlimited"). Example:
Shell          Set
sh/ksh/bash    ulimit -s 8192
csh            limit stacksize 8192
tcsh           limit stacksize 8192

6. Check Solaris Resources

6.1. Process control which limits the process size

Use prctl [pid] to check.
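For example, to check the address-space cap on a running process (resource control names vary by configuration):

prctl -n process.max-address-space -i process <pid>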


6.2. Check zone configuration which limits physical/swap

Use zonecfg to check.

7. Check for Linux kernel specific bugs

See also Note 1625806.1, "Java SE 7 Produces error 'Could not reserve enough space for object heap' in RedHat 6 Servers with Linux Kernels 2.6.32-358.11.1.el6.i686 or Later".

Monitoring the Planning JVM in EPM Environment

Hyperion Planning - Version 11.1.2.3.000 and later
Information in this document applies to any platform.

Purpose


The purpose of this article is to provide information on tools used for monitoring the Planning JVM. When you encounter slow performance or a JVM crash, this monitoring information is requested by Support for troubleshooting, and this KM will help you gather and provide it. The same principles can be applied to any JVM in the EPM System.

Scope

The target audience for this KM is the Planning server administrators who configure and maintain the Planning environment.
For the discussion here we have Planning deployed on JRockit.
We will use JRockit Mission Control (JRMC), jrcmd, and the WebLogic Administration Console to monitor the Planning JVM.
There are many other tools to monitor the JVM that are not covered here.

Details

 

Java Thread Dump

A Java thread dump is a snapshot that shows what every thread in the JVM process is doing at a particular point in time.
Important to note: thread dumps do not give solutions. They help identify problems and hotspots in running Java applications. Take thread dumps when:

  • The server does not respond to new requests
  • Requests time out
  • Requests take longer and longer to process
  • The server is no longer reported as running

Refer to Note 1098691.1, "Different ways to take thread dumps in WebLogic Server".
Below are the steps for taking a thread dump using the WebLogic Console and the jrcmd tool.

Weblogic Console

Start the WebLogic Admin Server.
Log on to the WebLogic Admin Server console at http://server_name.example.com:7001/console
Click Domain Structure -> Environment -> Servers.
Click the name of the managed server, e.g., "Planning0".


Under "Settings for Planning0", select "Monitoring" and then  sub tab "Threads".
Dump Threads Stacks
Start the test that demonstrates the performance problem.  While the test is running click the "Dump Thread Stacks" button.

Copy and save the page to a new file, "Planning0-ThreadDump.log", starting with
===== FULL THREAD DUMP ===============
Collect the files <DOMAIN_HOME>\servers\AdminServer\logs\AdminServer.log and <DOMAIN_HOME>/servers/Planning0/logs/Planning0.log.
<DOMAIN_HOME> is usually <EPM_ORACLE_HOME>\user_projects\domains\EPMSystem.
Attach these files and Planning0-ThreadDump.log to the SR.
Take at least 3 thread dumps, around 10 seconds apart, collected while the server hang issue is observed.

Jrcmd

jrcmd is a command line tool that sends commands to a given JRockit JVM process:
JAVA_HOME\bin\jrcmd.exe (Windows)
JAVA_HOME/bin/jrcmd.sh (Linux)
jrcmd <pid> print_threads

Reading the Thread Dump


Reading the logs: different JVM vendors display the data in different formats:
• Markers for start/end of thread dumps
• Reporting of locks
• Thread states and method signatures
However, the underlying data (the stack) exposed remains the same across vendors. Thread dumps can contain lots of data, and trying to read them in a text editor can be very tricky.
Here we are using ThreadLogic to open the Planning thread dump saved earlier.


VM Metrics

You can benefit from the recording's code profiling, which identifies heavily utilized classes, or from the event profiling, which quantifies time spent, for example, by garbage collection, blocked threads, and code compilation.
The flight recorder can be used through JRockit Mission Control (JRMC) and the JRockit command line (jrcmd). Here are the settings that need to be in place for the Flight Recorder to work.

Flight Recording With The JRockit Mission Control tool

This provides support for in-depth monitoring of the JRockit Java virtual machine's environment and performance, as well as the ability for product support and developers to analyze runtime behavior and diagnose failures. It provides an out-of-the-box ability to record metrics over a period of time and to preserve those metrics in a file which can be forwarded to support and then to development.
The JRockit Mission Control client executable is located in JROCKIT_HOME/bin. If that directory is not on your path, you have to type the full path to the executable file, as shown below:

JAVA_HOME\bin\jrmc.exe (Windows)
JAVA_HOME/bin/jrmc (Linux)


Make the following changes to the startup script setCustomParamsPlanning.sh (on Unix or Linux) or to the service registry (on Windows) to use JRMC:

-Djava.rmi.server.hostname=dxxx(hostname)
-Xmanagement:ssl=false,authenticate=false,autodiscovery=true,port=8888
Once the registry has been configured, you can use JRMC to connect to the Planning JVM.

Once you are connected to the Planning JVM you can use the JRMC to start the Flight Recorder.
Select the pertinent WebLogic server node, right-click and select Start Flight Recording.

Once the recording completes, the flightRecording*.jfr file can be viewed.


To interpret the information in the recording (memory usage, garbage collection, threading, etc.), you can use the JRockit documentation found here:

http://download.oracle.com/docs/cd/E15289_01/index.htm

Memory and CPU usage

Monitoring the JVM to see how memory, CPU, threads, and methods are being used during normal operation, and to see how the JVM is stressed during specific testing, is helpful for getting a good idea of the health of the system.

Console in The JRockit Mission Control tool

To view real-time behavior of your application and of the Oracle JRockit JVM, you can connect to an instance of the JRockit JVM and view real-time information through the JRockit Management Console. Typical data that you can view includes thread usage, CPU usage, and memory usage.


 Jrcmd commands


memusage
objectsummary

Refer to Oracle JRockit Mission Control Use Cases:
http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/appMCUseCases.html

Java Heap Dump

Collect a heap dump at the time of the out of memory condition and provide the dump file to support for review.

On JRockit you can produce an out-of-memory diagnostic file by passing the following parameters via setCustomParamsPlanning.sh or the Windows registry:

-Djrockit.oomdiagnostics=true -Djrockit.oomdiagnostics.filename=/home/oracle

Note that the heap dump file will be a little larger than the maximum defined heap size, so make sure that the defined HeapDumpPath has enough disk space to accommodate the file(s).



In JRockit Mission Control, go to the "Advanced" icon, then the "Diagnostic Commands" tab, and select and execute the "hprofdump" command.
The file is produced in
\Oracle\Middleware\user_projects\domains\EPMSystem\heapdump_Mon_Mar_24_08_41_16_2014.hprof


Reading the Heap Dump


A user-friendly tool that can be used to open heap dump files is Eclipse Memory Analyzer, also known as MAT. MAT can be downloaded free from the web and works out of the box for analysis of JRockit and HotSpot heap dumps. Heap dump files from JRockit have the name format jrockit_<process id>.hprof.

The heap dump analysis in Eclipse Memory Analyzer (MAT) starts from the Overview tab, which shows the biggest Java objects retained by size along with a list of actions and reports.