Sunday, June 13, 2010

OSB 11g - JCA polling adapter: StuckThread trace in the server log

I recently had to configure the JCA AQ adapter in OSB 11g to poll for new messages on an inbound AQ queue. This is quite easy using this tutorial.

However after I had configured the adapter and run some successful tests I saw the following stack trace in the server log after a period:

<[STUCK] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "600" seconds working on the request "$WorkAdapterImpl@4b936059", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
weblogic.jdbc.wrapper.XAConnection_oracle_jdbc_driver_LogicalConnection.dequeue(Unknown Source)
weblogic.jdbc.wrapper.JTAConnection_weblogic_jdbc_wrapper_XAConnection_oracle_jdbc_driver_LogicalConnection.dequeue(Unknown Source)

Apparently, this StuckThread trace appears 10 minutes after the server has been started and is caused by the fact that WebLogic by default uses one polling thread that remains active; in other words, it is never released by the adapter, by design. The stuck thread trace can therefore be ignored.

Please look in the OSB JCA transport guide for a detailed explanation and the configuration steps that can be applied to prevent stuck-thread traces for polling threads (use a specially configured Work Manager).
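As a sketch of what that configuration can look like (the Work Manager name is my own invention; check the JCA transport guide for the exact steps), you can define a Work Manager that is exempt from stuck-thread detection and assign it as the adapter's dispatch policy:

```xml
<!-- Hypothetical Work Manager definition (config.xml fragment); threads
     running under it are ignored by WebLogic's stuck-thread detection -->
<work-manager>
  <name>JCAPollingWorkManager</name>
  <ignore-stuck-threads>true</ignore-stuck-threads>
</work-manager>
```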

Wednesday, June 02, 2010

Coherence in Oracle Fusion Middleware 11g: useful tips & links

Since the release of Oracle Fusion Middleware 11g, the integration of Coherence into Fusion Middleware has increased drastically with every patch set release. Currently, in FMW 11g PS2, Coherence is used for:

  • Cluster Deployment

  • WebLogic ActiveCache/Coherence*Web integration

  • OSB ResultCache

This posting contains useful pointers to documentation sections that can help you with managing and using Coherence in FMW 11g. The list is not complete, but a first attempt to bundle the links I have used before and which were, and are, very helpful to me.
Coherence Network and performance tuning

  • Configuration tips for local laptop configuration - Thanks Marc;

  • Use tangosol.coherence.ttl (time-to-live) to prevent network traffic to different hosts - useful when working locally on laptops.

  • Use tangosol.coherence.localhost to force Coherence to bind to a specific address.

  • Use tangosol.coherence.wka to force unicast cluster communication. Choose the WKA addresses and the number of WKA addresses carefully to avoid loss of service in case the WKA nodes die.
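To illustrate, these properties can be passed as JVM system properties when starting a cache node; the addresses below are placeholders for your own environment:

```shell
# Hypothetical local-development settings: keep packets on this host (ttl=0),
# bind to a fixed address and use a well-known address (WKA) so the cluster
# communicates over unicast only
java -Dtangosol.coherence.ttl=0 \
     -Dtangosol.coherence.localhost=192.168.1.10 \
     -Dtangosol.coherence.wka=192.168.1.10 \
     -cp coherence.jar com.tangosol.net.DefaultCacheServer
```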

  • For the OSB, the Coherence cache and cluster configuration files can be found in /config/osb/coherence and can be changed accordingly.
  • When encountering Coherence socket buffer warnings in the log files, it is useful to increase the socket buffer size of your OS. Please have a look here to find the settings per OS.

  • Coherence performance tuning

  • In my opinion it is wise not to use the managed server JVM for cache storage in case your cache will grow to a significant size. Configure the managed server JVM with local storage disabled and off-load storage to other cache servers in the Coherence cluster running in separate JVMs.
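A minimal sketch of this split (the localstorage property name is taken from the Coherence documentation; file locations are placeholders):

```shell
# Managed server JVM: joins the Coherence cluster but stores no cache data
JAVA_OPTIONS="$JAVA_OPTIONS -Dtangosol.coherence.distributed.localstorage=false"

# Dedicated cache-server JVM: holds the actual cache data
java -Dtangosol.coherence.distributed.localstorage=true \
     -cp coherence.jar com.tangosol.net.DefaultCacheServer
```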

Active Cache - Coherence*Web

I will update this post when I have more useful information that can be shared. In case of Coherence-related questions, feel free to send me an email.

Sunday, May 30, 2010

Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid

The IT book publisher Packt asked me a couple of weeks ago, based on my blog activities here, whether I was willing to review the newly published book Oracle Coherence 3.5. Being a person who works with Coherence, follows the developments around Coherence and believes in the capabilities of Oracle Coherence for building reliable, scalable architectures, I was definitely willing to review this book and to bundle my feedback in a blog posting afterwards.

The book starts with an overview of what scalability, performance and high availability mean and (also important) what they do not, what they involve, how you achieve them and how Coherence fits into this picture. After explaining the basic concepts of performance, scalability and availability, the book proceeds with explaining how easy it is to start off with Oracle Coherence and build your first 'Hello World' Coherence application.

After this overview the book starts with the 'real stuff'. First a detailed overview is given of the different types of caching topologies and the options Coherence provides to decide on the right caching strategy. The book also provides a 'When to use it?' section per topology, explaining for which typical applications the topology is applicable and should be used; very useful.

The following chapter explains how you can define your domain objects and make them Coherence-aware. It uses the concepts of Domain-Driven Design as the basis to construct the domain model for a sample application. For me personally, this chapter was an eye-opener regarding building domain models for Coherence. One thing you should keep in mind when you start using Coherence, which I had learned before, is to keep things very simple and not to treat Coherence as an in-memory relational database. Only put those things in the Coherence grid which should be in the cache, and avoid thinking as if you were building a relational database model. For example, store whole aggregates (order, order items) in a single cache entry for the sake of atomicity, consistency and simplicity. The chapter also provides information about efficient object serialization using the Coherence Portable Object Format (POF) and object change management using evolvable objects.

The book continues by covering the following topics, which make it a complete Coherence reference:

  • Querying the Grid

  • Entry processing

  • Event processing

  • Persistency

  • Coherence*Extend

  • C++ and .Net interoperability

My final conclusion about this book is that it is an excellent book to start off with in order to make yourself familiar with implementing Coherence applications. The book is complete in terms of 'should-know' features and contains useful guidelines and best practices.

The book can be ordered from here:
- Packt site
- Amazon site

Friday, May 07, 2010

How-to: Analyzing Out-Of-Memory issues in WebLogic 10.3.3 with JRockit 4.0 Flight Recorder

Oracle WebLogic Server 10.3.3 provides out-of-the-box support for JRockit Flight Recorder (JFR), the new enhanced run-time JVM analyzer in JRockit 4.0 positioned as the replacement for JRA, with the following points of improvement: always on, better data, third-party application integration through an API, and low-to-zero overhead. JFR integrates seamlessly with WLS 10.3.3 to produce recordings on demand or event-based, to analyze and solve all kinds of JVM issues.

In this blog posting, I show how to automatically capture an overall WLS system image, including a JFR image, after an out-of-memory (OOM) exception has occurred in the JVM hosting WLS 10.3.3.

Setting up WLS Diagnostic framework
To enable event generation by WLS for JFR, the WebLogic Diagnostic Volume property has to be set to low, medium or high, indicating the amount of recorded events. The Diagnostic Volume can be set in the WLS Administration Console -> Environment -> Servers -> YourServer.

Now that we have started event generation by WLS for JFR, we have to configure a WLS diagnostic system module with a watch rule and a notification, so that image capturing is triggered whenever an OOM error happens. The image capturing mechanism captures the WLS system state together with the JFR buffered event data and generates a zip file containing the JFR file in the image folder. The image folder is specified in the WLS Administration Console -> Diagnostics -> Diagnostic Images -> YourServer.

Go in the WLS Administration Console to Diagnostics -> Diagnostic Modules and create a new diagnostic module. I have called it JRFDiagnosticeModule. Click on the created diagnostic module and target it to the designated server (tab Targets). Go back to the Configuration tab and click on the Watches and Notifications tab to create a watch rule and a notification with the following specs:

Watch rule

  • Type = Server Log

  • Expression = (MESSAGE LIKE '%OutOfMemoryError%')

  • Use an automatic alarm so that the rule is re-enabled each time it is triggered after a defined period


Notification

  • Type = Diagnostic Image

Make sure everything is enabled and the notification is associated with the watch. Leave all other settings at their defaults.

This is all we have to do in WLS 10.3.3. Before we proceed with triggering an OOM with a sample application, we first start the JRockit Mission Control (JRMC) application to verify that the JFR recording has been started. Execute the jrmc file in the /bin folder to start JRMC. Open the JVM browser, right-click on the WLS JVM and select view reports. In the lower-right panel you'll see that one recording has been started:

Generate an OOM error
To generate an OOM I've created a simple web application consisting of a page with a button that triggers a servlet executing the following code snippet to generate an OOM in WLS 10.3.3.

List<String> list = new ArrayList<String>();
while (true) {
    list.add("test string");
}

Deploy the web application to the WLS server and trigger the OOM by pressing the button on the page. After a few seconds you'll see this in the console logging:

The console logging shows that the watch rule has been triggered and an image capture has been generated in the folder /servers/AdminServer/logs/diagnostic_image. Unzip the file and open the JRockitFlightRecorder.jfr file in JRMC.

In the JRMC console you are now able to analyze the root cause of the problem. For this obvious OOM example (you can also get the root cause from the console output, but the intention here is to show the capabilities of JFR in general), you can have a look at the allocation tab in the Memory panel to drill down to the class that causes the String object creation (it's just an example; the JFR recording contains a lot more information, about threads/CPU utilization and GC executions for example):

The Hot Methods tab in the Code panel also shows the servlet's doPost method as a top-listed hot method:

This simple and obvious example shows how easy it is to let the WLS diagnostic framework continuously produce monitoring data for JFR that can be dumped to a JFR image when required, e.g. in case of an OOM exception or other events. The default WLS diagnostic framework can be configured to collect a specific amount of events by using the coarse-grained Diagnostic Volume property. If you want extra events in the JFR image, you can start an extra recording using JRMC or the command line. It is also possible to use the JVM startup parameter -XX:+FlightRecordingDumpOnUnhandledException to trigger a JFR dump after an unhandled exception in the JVM.
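As a sketch (the dump path below is a placeholder; verify the flag names against the JRockit R28 command-line reference), the dump-on-exception behaviour could be switched on in the server start script:

```shell
# Trigger an automatic JFR dump when an unhandled exception occurs;
# assumed dump location, adjust to your environment
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+FlightRecordingDumpOnUnhandledException \
  -XX:FlightRecordingDumpPath=/tmp/jfr-dumps"
```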


Thursday, May 06, 2010

How-to: Building REST/JSON services with OSB 11g and JAX-RS

Over the last couple of weeks, I have read a lot of blog postings about REST services in combination with the Oracle Service Bus (like this posting). I even made a first attempt to write a posting about this subject myself, but I got such constructive comments in return that I decided to write a completely new one. Thanks for the comments :)

I decided to play around with REST and OSB myself and chose JAX-RS (Jersey) as the Java technology to build my REST service, and the brand new OSB 11g release for proxying this REST service. In this posting, I will show you how easy it is to build a REST/JSON service with the feature-rich and highly flexible JAX-RS standard, deploy it to WebLogic 10.3.3 and proxy it with an OSB 11g service.

REST service in JAX-RS
Required libraries (can be downloaded from here):

  • jersey-bundle-

  • jsr311-api-1.1.jar

  • asm-3.1.jar

In JDeveloper I built a very simple REST product service with a single method to find a product by its id. The REST method returns a JSON representation of the product.

First you have to create a new web application project in JDeveloper, because the JAX-RS product service is deployed as a web application to the WebLogic Server. In the web.xml file the following things have to be specified:

  • the JAX-RS servlet which will handle all requests and forward them to the appropriate REST Service class.

  • The context path to access the servlet

  • The mime type application/json

The web.xml has the following content:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
  <!-- the Jersey servlet handles all requests and forwards them to the REST service classes -->
  <servlet>
    <servlet-name>JAX-RS Servlet</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>JAX-RS Servlet</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
  <mime-mapping>
    <extension>json</extension>
    <mime-type>application/json</mime-type>
  </mime-mapping>
</web-app>
Now that the web.xml file is in place, two Java classes have to be implemented:

  • ProductResource

  • Product

The ProductResource class will contain the method to find a product by its id:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/products")
public class ProductResource {

    @GET
    @Path("{id}")
    @Produces("application/json")
    public Product getProductById(@PathParam("id") int id) {
        // Return a simple new product with the provided id
        return new Product(id, "DummyProduct");
    }
}
The code listing above shows that the getProductById method is only accessible through the HTTP GET method. Also, the use of the @Path annotation at class and method level makes it possible to set a specific relative URI with which the service can be accessed. The {id} part serves as a placeholder for the product id and can be accessed through the @PathParam annotation.

The Product class is shown in the following code listing and uses the JAXB @XmlRootElement binding annotation to automatically map the class structure to a JSON structure. Isn't that cool :) More JSON serialization and deserialization options in JAX-RS can be found here.


import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Product {

    private int id;
    private String name;

    public Product() {
    }

    public Product(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public void setId(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
Coding is completed, and it should be clear by now that the use of annotations makes it very easy and flexible to create REST services with JAX-RS. Create a WAR deployment descriptor, make sure you add the Jersey libraries to the WAR file, and set the JEE web context root to services. Finally, deploy the WAR file to the WebLogic 10.3.3 server. I targeted the application to the osb_server1 managed server, which hosts my OSB 11g installation.

After deployment you can test the REST service using this URL where I use 1 as the id:


OSB 11g proxy
I used the brand new OSB 11g installation for creating a proxy service for my REST service.

The implementation is fairly simple and straightforward, and follows more or less the same steps used in this excellent posting:

Business Service
The business service invokes the REST product service. Make sure you use the messaging service type with the HTTP transport protocol. Also set the HTTP method to GET.

Proxy Service
Create a proxy service in Eclipse and use the messaging service type with the HTTP transport protocol. I used request type none and Response type text. In the message flow I added a routing action to invoke the business service.
I need to mention two important things about my proxy service implementation:

  • Set the endpoint to /osb-services/products. This enables you to append anything to this context path, for example /{id}. This also makes it possible to use the OSB service as a proxy for different types of REST calls (make sure you can switch between HTTP methods in the message flow) to the same product service.

  • Use the transport request element from the $inbound variable to append the relative path after /osb-services/products to the REST service endpoint. I've used an Insert action for this:
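As a sketch, the Insert action could be configured roughly like this (the ctx and http prefixes refer to the standard OSB transport namespaces; verify the exact element names against the contents of your own $inbound variable):

```
Expression  : $inbound/ctx:transport/ctx:request/http:relative-URI
Location    : as last child of
XPath       : ./ctx:transport/ctx:request
In variable : outbound
```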

Deploy the OSB service to the OSB 11g server and use the test console to test the service. Make sure you set the relative-URI attribute in the transport panel of the test console:

The $body element in the response message should look like this:

The OSB service just passes the JSON response on to the client. It is fairly simple to convert the JSON output to XML and vice versa using JSON-lib in a Java callout. How to do this is described in this posting.

Also, the JSON structure returned by the product service is very simple. In most cases you have to do more work with JAX-RS in order to construct the required JSON structure. The JAX-RS libraries contain options for configuring the JSON output.

Sunday, March 07, 2010

How-to: Oracle Service Bus 10gR3 - Oracle FMW B2B 11g interoperability

In this posting I will describe how you can integrate Oracle B2B 11g and Oracle Service Bus 10gR3 to send messages to Oracle B2B 11g from an OSB 10gR3 service.

The easiest way to integrate Oracle B2B 11g and OSB 10gR3 is by using JMS. Oracle B2B 11g supports JMS as protocol for its internal inbound and outbound delivery channels out of the box. To enable JMS, set the Use JMS Queue as default property to true in the Administration -> Configuration tab of the Oracle B2B 11g management console; this switches on the usage of the JMS queues B2B_IN_QUEUE and B2B_OUT_QUEUE.

In this example I have reused the ebXML configuration that I described in a previous blog posting. In Oracle Workshop 10gR3 (shipped with OSB 10gR3) you now have to create a simple OSB service that accepts a message through a proxy service and enqueues it on the B2B_OUT_QUEUE of Oracle B2B 11g using a business service. Below I describe the details that require special attention while implementing the OSB 10gR3 service.

Business Service - JNDI string
The business service serves as a JMS wrapper for the B2B_OUT_QUEUE. Use the following JNDI string to locate the ConnectionFactory and the Destination:

jms://(ofm b2b 11g host):(port)/jms.b2b.B2BQueueConnectionFactory/jms.b2b.B2B_OUT_QUEUE

Proxy Service - message flow
Oracle B2B 11g expects several user header properties to be set in the JMS transport header. A list of the required properties can be found here. You have to use the property names defined in the second column. To set the user header properties, use the Transport Header action in the request lane of the Routing action (assuming you use a Routing action). Click Add Header to add a new header property and select Other to define a custom header property. Add header properties for all the properties listed in the table except the last four (at least I didn't set them).

Here's a screen shot of my OSB 10gR3 configuration:

After you have completed the message flow publish your configuration to the server and use the SB console test functionality to execute a test.

Wednesday, February 17, 2010

How-to: Archiving Oracle FMW B2B 11g run-time data using data pumps

In Oracle Fusion Middleware B2B 11g enhanced procedures are introduced to archive and/or purge the B2B 11g run-time data.

The procedures that take care of archiving/purging live in the SOAINFRA database schema of your FMW 11g database repository:

The B2B_EXPORT_JOB procedure does the actual archiving and is invoked from the B2B_ARCHIVE_PROCEDURE procedure. The B2B_EXPORT_JOB procedure uses a data pump to archive the run-time data to a file on the file system.

To make use of data pumps, you first have to grant the SOAINFRA schema DIRECTORY object privileges, so that it can create a DIRECTORY object pointing to the location on the file system in which the data pump will write the archive file (the name of the DIRECTORY object should be B2B_EXPORT_DIR):

-- I tested it on my development installation using XE, hence the 'dev' prefix
GRANT create any directory TO dev_soainfra;
GRANT drop any directory TO dev_soainfra;

After granting the right privileges you can create the directory object with:

create or replace directory "B2B_EXPORT_DIR" as '(absolute path to the location on the file system)';

The following script archives and purges all run-time data for completed messages from 19-02-2009 until 19-02-2010 and writes the archive to a file called 'b2b_runtime_export.dat':

BEGIN
  b2b_archive_procedure(to_date('19-02-2009', 'dd-mm-yyyy'), to_date('19-02-2010', 'dd-mm-yyyy'), 'MSG_COMPLETE', 'b2b_runtime_export.dat', 'Y');
END;
/

The standard archiving script provided by Oracle FMW B2B 11g can easily be extended to fit your specific needs or to be merged into existing archiving procedures.

- Oracle FMW 11g documentation

Monday, January 25, 2010

Book review: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5

In December last year the IT book publisher Packt Publishing contacted me to ask if I was willing to review their newly released book about Oracle Enterprise Manager Grid Control:

Middleware Management with Oracle Enterprise Manager Grid Control 10g R5

They've selected me based on the contents of my blog.

The book is well structured and written in clear language. It starts by explaining the main features of Oracle Enterprise Manager Grid Control, how they address common administration tasks and how they make the life of a system administrator easier. The book continues by defining and describing the main components of Oracle Enterprise Manager Grid Control and how they work together to enable Grid Control to fulfill its tasks. In the subsequent chapters the book covers the main Oracle middleware components and explains how they can be managed with Oracle Enterprise Manager Grid Control. The following components caught my specific attention:

  • Oracle BPEL Process Manager
  • Oracle Service Bus
  • Oracle WebLogic Server
  • Oracle Coherence

Each chapter contains detailed information about how to configure and use specific Oracle Enterprise Manager Grid Control features, like notification management, automatic provisioning and managing configuration inconsistencies. Additionally, the book covers topics like CAMM and AD4J, and how to write your own monitoring plug-in. The book concludes with a best-practices chapter.

To be honest, system administration is not my main focus area; I'm more on the software development side. Therefore my main intention in reading this book (oh... I agreed to review this book) was to learn more about the concepts, capabilities and benefits of using Oracle Enterprise Manager Grid Control to manage the middleware layer, and to broaden my view on middleware management (the area in which I mainly develop software). After reading the book, I can only conclude that it really has helped me to understand the true capabilities and benefits of Oracle Enterprise Manager Grid Control and how it can be used to manage Oracle middleware components. This book is really a good starting point for everyone who wants to learn more about middleware management with Oracle Enterprise Manager Grid Control.

You can find more info about the book and how to order it on the book's homepage.

Wednesday, January 13, 2010

How to: OSB - FMW SCA 11g interoperability supporting transaction propagation

Currently, the BPEL transport in OSB does not support FMW 11g. However, I have found a way, although so far only proven in theory to the best of my knowledge, to enable transaction propagation between OSB and FMW 11g SCA composites. The basic idea is to communicate between OSB and FMW 11g SCA composites using the SDO/EJB binding in 11g. The t3 protocol, used as the communication protocol between the EJB client and the SCA (soa-infra) engine, should take care of the transaction context propagation.

This should do the trick until OSB gets native support for FMW 11g interoperability. One disclaimer: I still have to prove my theory by running a test, but I am quite sure that it will work, so I am sharing it here already. I also still have to work out which MEPs could be supported with this solution, so any thoughts are welcome.

Sunday, January 10, 2010

Oracle FMW B2B 11g: How to collect HTTP header info from inbound messages using Java Callouts

In Oracle FMW B2B 11g, the Java callout functionality makes it possible to add Java hooks to an inbound or outbound message flow. Callouts can be written and configured per agreement or per delivery channel (transport callouts). More info about managing callouts can be found here.

In this blog posting, I will show you how an agreement callout can be used to collect HTTP headers from an inbound message received by the default B2B transportServlet. I use an agreement callout because it is not possible to define a transport callout for an inbound HTTP host channel.

The callout implementation is rather simple:

import java.util.List;

import oracle.tip.b2b.callout.Callout;
import oracle.tip.b2b.callout.CalloutContext;
import oracle.tip.b2b.callout.CalloutMessage;
import oracle.tip.b2b.callout.exception.CalloutDomainException;
import oracle.tip.b2b.callout.exception.CalloutSystemException;

public class HttpHeaderAgrCallout implements Callout {

    public void execute(CalloutContext calloutContext, List input,
                        List output) throws CalloutDomainException,
                                            CalloutSystemException {
        try {
            CalloutMessage cm1 = (CalloutMessage)input.get(0);

            // The HTTP transport headers are available as callout message parameters
            System.out.println("Print transport header ::");
            System.out.println("parameters - " + cm1.getParameters().toString());

            // Pass the message body on unchanged
            CalloutMessage cmOut = new CalloutMessage(cm1.getBodyAsString());
            output.add(cmOut);
        } catch (Exception e) {
            throw new CalloutDomainException(e);
        }
    }
}
The getParameters() method returns all the HTTP transport header attributes in a Properties object.

Compile the callout code and package it in a jar file. The jar file has to be copied to a location that can be accessed by the B2B server (or by all B2B nodes if you have a clustered environment). Have a look at the B2B callout documentation to find out how to configure the callout for your inbound agreement.
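As a sketch, compiling and packaging the callout could look like this (the location of b2b.jar, which contains the callout API, depends on your FMW 11g installation and is a placeholder here):

```shell
# Compile against the B2B callout API and package the class into a jar
javac -classpath /path/to/b2b.jar HttpHeaderAgrCallout.java
jar cf HttpHeaderAgrCallout.jar HttpHeaderAgrCallout.class
```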

It is wise to put 'Agr' or 'Transport' in the callout name to make a clear distinction between the types of callouts, as they appear together in the callout selection drop-down list in the B2B management console.

So why would you need this anyway? Well, for example, to extract a specific HTTP header attribute that is set by the front-end HTTP server and enrich the message with it.