Thursday, October 02, 2014

SOA Suite 12c - WSM-02141 : Unable to connect to the policy access service

Today, I bumped into a problem accessing the WSM Policy Manager after I changed the listen address of the SOA managed server in my development environment. The administration server was no longer able to contact the WSM Policy Manager after the change, so I could not run any SOA service test from the EM console:
oracle.wsm.policymanager.PolicyManagerException: WSM-02141 : Unable to connect to the policy access service.
After some investigation I discovered that, by default, the WSM Policy Manager is auto-wired to the agents in the domain and is targeted to the managed servers, not to the AdminServer. This means that the AdminServer uses an agent to connect to the Policy Manager.

OWSM uses cross-component wiring to auto-discover the Policy Manager in the domain. When you use the Configuration Wizard to create or update a domain that includes OWSM, Policy Manager URLs are published to the Local Service table. The OWSM Agent is automatically wired to the OWSM Policy Manager using the endpoint entries published to the Local Service table.
However, the WSM agent's Policy Manager connection strings were not updated automatically in my environment after I changed the SOA managed server's listen address in the WLS Administration Console. This behavior is actually covered by the documentation:

If, however, you change the domain using tools other than the Configuration Wizard (such as the WebLogic Administration Console, Fusion Middleware Control, or WLST), any changes to the Policy Manager URL are automatically published to the Local Service table but the OWSM Agent client is not automatically bound to the new URL. In this case, you need to manually bind the OWSM Agent to the Policy Manager URL. For more information, see "Verifying Agent Bindings Using Fusion Middleware Control".
 The OWSM agents in my environment were still using the old OWSM Policy Manager URL for some reason. To fix my issue I had to:
  1. Go in the EM console to the WebLogic Domain home page -> drop-down menu -> Cross Component Wiring -> Components.
  2. Select the OWSM Agent to open the OWSM agent component configuration page.
  3. Re-bind both the t3 and http connection strings:

After rebinding both endpoints, the agent client configurations were wired again and the AdminServer was able to connect to the PM and I was able to run a SOA service test from the EM console :)
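To illustrate what gets rebound, here is a small sketch (my own illustration, not Oracle code) of the shape of the two Policy Manager endpoint URLs the agent is bound to. The hostname and port are examples; I am assuming the /wsm-pm context root, which is where the Policy Manager application is deployed.

```python
def policy_manager_urls(host, port):
    """Build the t3 and http OWSM Policy Manager endpoint URLs
    that the agent must be re-bound to after a listen-address change."""
    return {
        "t3": f"t3://{host}:{port}/wsm-pm",
        "http": f"http://{host}:{port}/wsm-pm",
    }

# Example: the new listen address of the managed server hosting wsm-pm
print(policy_manager_urls("soahost1.example.com", 8001))
```

If the connection strings shown in the agent configuration page still contain the old host or port, they need to be re-bound as described above.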

Please note that even when I targeted the wsm-pm application to the AdminServer, I still had to re-bind the agents to update the WSM Policy Manager URL.

At least in my environment, it looks like a manual rebind is needed to update the WSM Policy Manager URL in the OWSM agents after the listen address of a server hosting the WSM Policy Manager has been changed.

Thursday, July 03, 2014

Oracle SOA Suite 12c tips - Tuning the SOA infrastructure thread pool

One of the new capabilities of Oracle SOA Suite 12c is the ability to control the SOA infrastructure thread pools (except the resource pools for EDN and the adapters) with Oracle WebLogic Server work managers. Each partition has its own work managers defined. This allows you to separate services into partitions and, to some extent, tune them separately based on, for example, specific SLA requirements.

Well, this blog posting is not about explaining the SOA 12c thread pool concepts and all the knobs that you can use to tune the thread pools in SOA. That is extensively explained in the Oracle documentation, so I am not going to repeat it here. What I do want to highlight in this posting is how the SOADataSource impacts the SOA thread pool settings.

In SOA 12c, the size of the SOA thread pools is directly controlled by the Maximum Capacity setting of the SOADataSource. If you change the default value of 50 to, let's say, 250, that also changes the Maximum Threads Constraint settings, which are bound to the number of SOA database connections. For example, if the SOADataSource is configured with a maximum of 250 connections, the SOAInternalProcessing_maxThreads constraint will be bound to 125. This corresponds to the SOAMaxThreadsConfig internalProcessingPercentage setting, which is set to 50% by default.
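The relationship can be sketched as a simple calculation. The 50% internalProcessingPercentage default is taken from the text above; the helper itself is my own illustration, not Oracle code.

```python
def max_threads_constraint(max_capacity, percentage):
    """Max-threads constraint derived as a percentage of the
    SOADataSource Maximum Capacity (the connection pool size)."""
    return max_capacity * percentage // 100

# Default pool of 50 connections vs. a pool raised to 250,
# with the default internalProcessingPercentage of 50%:
print(max_threads_constraint(50, 50))   # 25
print(max_threads_constraint(250, 50))  # 125, as in the example above
```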


[Screenshots: SOAInternalProcessing_maxThreads Threads Constraint; SOAMaxThreadsConfig attribute]

Having the size of the SOA thread pools depend directly on the SOADataSource connection pool size, and thus on the availability of database connections to the SOA dehydration store, mitigates the risk that SOA runs out of DB connections. It is therefore recommended, in most customer scenarios, to tune only the percentages in the SOAMaxThreadsConfig configuration attribute or to increase the SOADataSource connection pool. Only dive into the work manager configurations themselves, such as the fair share classes and the thread constraints, if it is really needed.

Wednesday, June 04, 2014

Oracle Traffic Director: Instances, Processes & High Availability explained

Recently, I created a small slide deck to explain how Oracle Traffic Director instances, processes and high-availability concepts work together to front end requests to back end application servers with high availability.

The Oracle Traffic Director (OTD) environment the slide is based on runs on Exalogic Virtual and consists of 3 vServers:
  • 1 for the OTD admin server
  • 2 for the OTD admin nodes
More information about Oracle Traffic Director can be found here: Oracle Traffic Director documentation

Monday, December 02, 2013

Oracle SOA Suite 11g tuning tips for Oracle RAC database 11.2

During my project work I had to tune the SOA dehydration store on more than one occasion. In this posting I would like to share the tuning tips collected during these exercises. It is not a step-by-step guide, because with tuning there is never a 'one-size-fits-all' in my humble opinion, but it provides general guidance that you can use as a reference for your own situation. I will also refer to related Oracle documentation where available.

The reference installation I have based my tips on is a 2 node SOA clustered environment connected to a 2 node Oracle Database RAC environment (non-Exadata).

Database settings

Database settings that were proven to work best for my projects:






  • db_cache_size *: 0 or 1000m+
  • memory_target *
  • memory_max_target *
  • sga_target *: Automatic memory management
  • sga_max_size *: Automatic memory management
  • 0 (use dedicated services)
  • 700 (must be > JTA transaction timeout)


Index partitioning for RAC

For tuning the SOA dehydration store for RAC I used the following Oracle document as a reference.

For reducing index contention in a RAC database, I partitioned the following indexes (the default partition settings were sufficient for my projects).

Secure Files for optimized LOB storage

Oracle Fusion Middleware Service Oriented Architecture (SOA) Suite is a database intensive middleware system with multiple components that store many different types of data in the Database. During a single invocation of a composite, multiple inserts and updates of unstructured data like documents, messages, faults and payloads may take place. The amount of data in the Oracle SOA Suite database grows very quickly and this rapid growth is especially relevant for such unstructured data as it may affect not only the manageability of the database, but also its performance. Audit Trails, Business Decision Rules, Sensors, EDN and multiple other objects in the Oracle FW SOA Schemas make intensive use of unstructured data in lobs, clobs and blobs. SecureFiles is a feature introduced in Oracle Database 11g that is specifically engineered to deliver high performance for this type of unstructured data. (Source:

Besides other benefits, SecureFiles eliminates the infamous HW enqueue contention wait events seen when using BasicFiles for LOB storage. In most cases, these HW enqueue contention wait events were the main cause of performance issues with the SOA DB for me.
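As a sketch of what a migration to SecureFiles looks like, the helper below builds the standard Oracle ALTER TABLE ... MOVE LOB ... STORE AS SECUREFILE DDL. The table, LOB column, and tablespace names are illustrative assumptions only; always check the actual SOA schema and the SOA SecureFiles white paper before running anything like this.

```python
def move_lob_to_securefile(table, lob_column, tablespace):
    """Build an ALTER TABLE statement that moves a LOB column to
    SecureFiles storage in the given tablespace (standard Oracle DDL)."""
    return (
        f"ALTER TABLE {table} MOVE LOB ({lob_column}) "
        f"STORE AS SECUREFILE (TABLESPACE {tablespace})"
    )

# Illustrative example; verify the real column and tablespace names first.
print(move_lob_to_securefile("CUBE_SCOPE", "SCOPE_BIN", "SOA_LOB_TS"))
```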

Additional settings:

DEDUPLICATION and COMPRESSION set to LOW (note: this requires the additional Advanced Compression option license)

PCTFREE to 20 for the following tables:
o    composite_instance
o    cube_instance
o    cube_scope
o    dlv_messages
o    dlv_subscriptions
o    xml_document
o    mediator_payload
o    mediator_case_detail
o    mediator_audit_document
o    audit_details
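The PCTFREE change for the tables above can be scripted; the sketch below just emits the standard ALTER TABLE DDL for each table in the list. It is my own illustration; verify the statements against your own SOA schema before applying them.

```python
# Tables listed above that benefit from PCTFREE 20
TABLES = [
    "composite_instance", "cube_instance", "cube_scope", "dlv_messages",
    "dlv_subscriptions", "xml_document", "mediator_payload",
    "mediator_case_detail", "mediator_audit_document", "audit_details",
]

# One ALTER TABLE statement per table (standard Oracle DDL)
statements = [f"ALTER TABLE {t} PCTFREE 20" for t in TABLES]
for s in statements:
    print(s)
```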
I did not experiment with table partitioning, because performance requirements were already met on my projects without it. But if you want or need to get the most out of the configuration, I recommend considering table partitioning based on the performance numbers provided in the SOA SecureFiles white paper.

RAC patches on top of
I recommend to always use the latest patch bundle (bundle 4 at the time of writing) on top of the latest non-Exadata patchset for 11.2, and to install the following one-off patches to solve specific SecureFiles issues:
  • 13787307
  • 13775960
  • 12614359

Tablespace separation 

As a best practice I also recommend separating the indexes and LOB segments into separate tablespaces. Especially on SOA 10g, using BasicFiles and RAC, I noticed a significant reduction in wait events when using separate tablespaces for the indexes and LOB segments, so I reused this best practice when introducing SecureFiles. It also provides benefits from a space management perspective.

As a final note, I want to mention 2 things: 

1) I want to state again that the tuning guidelines provided in this posting serve as a reference; they will not necessarily provide the most optimal performance in every situation. The least I can say is that they probably won't harm.

2) Leverage AWR reports or Oracle Enterprise Manager 12c to monitor and analyse DB performance during SOA performance testing. On many occasions they proved to be my best friend for finding performance bottlenecks.